Each row of your input is a valid JSON object, so if you don't care about CSV headers you can simply deconstruct each object back into an array of its values and pass it through the @csv filter:
$ jq -r '[.[]] | @csv' file
false,233,1669647142,6523.896,4494.82,2029.076,8841.63
false,235,1669647152,6523.896,4494.82,2029.076,8841.63
false,235,1669647596,6523.896,4494.82,2029.076,8841.63
false,233,1669651191,6524.496,4495.42,2029.076,8841.63
false,276,1669654797,6524.816,4495.74,2029.076,8841.63
false,437,1669658393,6525.901,4496.825,2029.076,8841.63
false,362,1669661992,6526.732,4497.656,2029.076,8841.63
false,471,1669665603,6527.062,4497.986,2029.076,8841.63
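For the record, [.[]] just collects the values of each object into an array, preserving the object's own key order; run on its own (with -c for compact output), the first row of your input becomes something like:

$ jq -c '[.[]]' file | head -n 1
[false,233,1669647142,6523.896,4494.82,2029.076,8841.63]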
If you do care about headers, it's more complicated. The best I could come up with is:
- slurp the whole file into an array of rows, indexed with to_entries
- reduce that into an array-of-arrays, initializing the accumulator with the keys extracted from the 0th row and appending the values of every row
You now have an array-of-arrays with a header array at the top, which can be mapped back to an array of CSV strings (and finally, to individual CSV rows):
$ jq -r --slurp '
    to_entries |                                          # index the rows: [{key: 0, value: {...}}, ...]
    reduce . as $rows (
        [ .[0].value | keys_unsorted ];                   # seed with the header array from the 0th row
        . + [ $rows[].value | to_entries | map(.value) ]  # append the values of every row
    ) |
    map(@csv) | .[]
' file
"Outdated","Watt","Timestamp","A_Plus","A_Plus_HT","A_Plus_NT","A_Minus"
false,233,1669647142,6523.896,4494.82,2029.076,8841.63
false,235,1669647152,6523.896,4494.82,2029.076,8841.63
false,235,1669647596,6523.896,4494.82,2029.076,8841.63
false,233,1669651191,6524.496,4495.42,2029.076,8841.63
false,276,1669654797,6524.816,4495.74,2029.076,8841.63
false,437,1669658393,6525.901,4496.825,2029.076,8841.63
false,362,1669661992,6526.732,4497.656,2029.076,8841.63
false,471,1669665603,6527.062,4497.986,2029.076,8841.63
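For what it's worth, once you accept taking the header from keys_unsorted of the first row (as the reduce version above already does), the same output can be had without the reduction at all. A minimal sketch, assuming every row carries the same keys in the same order:

$ jq -r --slurp '(.[0] | keys_unsorted), (.[] | [.[]]) | @csv' file

Here the comma emits the header array followed by one value array per row, and @csv formats each of them in turn.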