awk/sed/perl one-liner: how to print only the properties lines from a JSON file
jq is the right tool for processing JSON data:
jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json
The output:
"content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi"
"is_supported_kafka_ranger : true"
"kafka_log_dir : /var/log/kafka"
"kafka_pid_dir : /var/run/kafka"
"kafka_user : kafka"
"kafka_user_nofile_limit : 128000"
"kafka_user_nproc_limit : 65536"
If it's really mandatory to have each key and value double-quoted, use the following modification:
jq -r '.items[].properties | to_entries[]
| "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json
The output:
"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536",
Please, please don’t get into the habit of parsing structured data with unstructured tools. If you’re parsing XML, JSON, YAML, etc., use a specific parser, at least to convert the structured data into a more appropriate form for AWK, sed, grep, etc.
In this case, gron would help greatly:
$ gron yourfile | grep -F .properties.
json.items[0].properties.content = "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi";
json.items[0].properties.is_supported_kafka_ranger = "true";
json.items[0].properties.kafka_log_dir = "/var/log/kafka";
json.items[0].properties.kafka_pid_dir = "/var/run/kafka";
json.items[0].properties.kafka_user = "kafka";
json.items[0].properties.kafka_user_nofile_limit = "128000";
json.items[0].properties.kafka_user_nproc_limit = "65536";
(You can post-process this with | cut -d. -f4- | gron --ungron to get something very close to your desired output, albeit still as valid JSON.)
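Spelled out, that full pipeline would look like this (with yourfile standing in for your actual JSON file):

$ gron yourfile | grep -F .properties. | cut -d. -f4- | gron --ungron

cut -d. -f4- strips the leading json.items[0].properties. prefix from each path, and gron --ungron reassembles the remaining paths back into JSON.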
jq is also appropriate.