jq - add objects from file into json array
jq has a flag for feeding actual JSON content into a filter: --argjson. What you need to do is store the contents of the first JSON file in a variable in jq's context and use that variable to update the second file:
jq --argjson groupInfo "$(<input.json)" '.[].groups += [$groupInfo]' orig.json
The "$(<input.json)" part is a shell redirection construct that expands to the contents of the given file; passed as the argument to --argjson, those contents are stored in the variable $groupInfo. You then append it to the groups array in the actual filter part.
Put another way, the above solution is equivalent to doing this:
jq --argjson groupInfo '{"id": 9,"version": 0,"lastUpdTs": 1532371267968,"name": "Training" }' \
'.[].groups += [$groupInfo]' orig.json
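Either way, the result is the same. As a minimal sketch, assume a hypothetical orig.json shaped like this (the filter only requires a top-level array of objects that each carry a groups array):

[{"name": "dev", "groups": []}]

Running either command above (with -c added for compact output) would then produce:

[{"name":"dev","groups":[{"id":9,"version":0,"lastUpdTs":1532371267968,"name":"Training"}]}]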
This is the exact case that the input function is for. Quoting the jq manual:

input and inputs [...] read from the same sources (e.g., stdin, files named on the command-line) as jq itself. These two builtins, and jq’s own reading actions, can be interleaved with each other.
That is, jq reads an object/value in from the file and executes the pipeline on it, and anywhere input appears, the next input is read in and used as the result of the function.
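As a quick illustration of that interleaving, here is a toy example using literal values instead of files:

echo '10 20' | jq '. + input'

jq reads 10 as ., input consumes the next value, 20, and the output is 30.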
That means you can do:
jq '.[].groups += [input]' orig.json input.json
with exactly the command you've written already, plus input as the value to add. The input expression will evaluate to the (first) object read from the next file in the argument list, in this case the entire contents of input.json.
If you have multiple items to insert, you can use inputs instead with the same meaning. It applies equally across one or several files given on the command line, and [inputs] collects all the remaining file bodies into an array.
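For example, with hypothetical companion files extra1.json and extra2.json, each holding one object, the following would append both objects to every groups array:

jq '.[].groups += [inputs]' orig.json extra1.json extra2.json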
It's also possible to interleave things to process multiple orig files, each with one companion file inserted, but separating the outputs would be a hassle.
Newer versions of jq support --slurpfile, which achieves the result you are looking for:

jq '.[].groups += $inputs' orig.json --slurpfile inputs input.json
Tested in jq 1.6
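Note that --slurpfile always binds an array of every JSON entity in the named file, so $inputs is an array even when input.json contains a single object; += then concatenates it onto each groups array. With a hypothetical extras.json holding two objects, both would be appended:

jq '.[].groups += $more' orig.json --slurpfile more extras.json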