Compression algorithm for JSON encoded packets?
Here is a short test on the compressibility of JSON data. Original file: crime-data_geojson.json, 72,844 bytes. (You can get the file here: https://github.com/lsauer/Data-Hub . The file was picked at random, so it cannot be taken as representative of average JSON data.)
Except for zip, all archiver parameters were set to ultra:
* cm / nanozip: 4076 / 72844 = 0.05595519
* gzip: 6611 / 72844 = 0.09075559
* LZMA / 7zip: 5864 / 72844 = 0.0805008
* Huffman / zip: 7382 / 72844 = 0.1013398
* ? / Arc: 4739 / 72844 = 0.06505683
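For reference, here is a minimal sketch of how such a ratio can be reproduced with Python's zlib (which implements the deflate algorithm behind gzip); the file name is taken from the test above, and the exact byte count will differ slightly from the gzip command-line tool because of header overhead:

    import zlib

    with open("crime-data_geojson.json", "rb") as f:
        data = f.read()

    compressed = zlib.compress(data, 9)  # level 9 = best compression
    print(len(compressed), "/", len(data), "=", len(compressed) / len(data))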
This means that the achievable compression is very high, and compressing JSON data is clearly beneficial. Note that JSON data generally has a high entropy. According to Wikipedia:
> The entropy rate of English text is between 1.0 and 1.5 bits per letter,[1] or as low as 0.6 to 1.3 bits per letter, according to estimates by Shannon based on human experiments.
The entropy of JSON data is often well above that. (In an experiment with 10 arbitrary JSON files of roughly equal size, I calculated 2.36 bits per character.)
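As a rough illustration of how such a figure can be computed, here is a minimal sketch of an order-0 (per-character) Shannon entropy estimate in Python; I am assuming this, or something close to it, is the kind of measurement meant above:

    import math
    from collections import Counter

    def shannon_entropy(text: str) -> float:
        """Order-0 Shannon entropy in bits per character, from symbol frequencies."""
        counts = Counter(text)
        total = len(text)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    with open("crime-data_geojson.json", encoding="utf-8") as f:
        print(shannon_entropy(f.read()))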
There are two more JSON-specific compression algorithms: CJson and HPack. HPack does a very good job, comparable to gzip compression.
I think two questions will affect your answer:
1) How well can you predict the composition of the data without knowing what will happen on any particular run of the program? For instance, if your packets look like this:
    {
        "vector": {
            "latitude": 16,
            "longitude": 18,
            "altitude": 20
        },
        "vector": {
            "latitude": -8,
            "longitude": 13,
            "altitude": -5
        },
        [... et cetera ...]
    }
then you would probably get your best compression by creating a hard-coded dictionary of the text strings that keep showing up in your data and replacing each occurrence of one of those strings with the appropriate dictionary index. (Actually, if your data were this regular, you would probably want to send just the values over the wire and simply write a function on the client to construct a JSON object from the values, if a JSON object is needed.)
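As a rough illustration, here is a minimal Python sketch of that substitution scheme; the dictionary contents and the one-byte index markers are assumptions for this example, and sender and receiver must share the identical table:

    # Hypothetical hard-coded dictionary; both ends must agree on it in advance.
    DICTIONARY = ['"vector"', '"latitude"', '"longitude"', '"altitude"']

    def encode(packet: str) -> str:
        # Replace each known string with a one-character index marker;
        # markers start at \x01, which cannot appear unescaped in valid JSON.
        for i, term in enumerate(DICTIONARY):
            packet = packet.replace(term, chr(1 + i))
        return packet

    def decode(packet: str) -> str:
        # Apply the substitutions in reverse to recover the original text.
        for i, term in enumerate(DICTIONARY):
            packet = packet.replace(chr(1 + i), term)
        return packet

Every repeated key then costs one byte on the wire instead of ten or more.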
If you cannot predict which headers will be used, you may need to use LZW, LZ77, or another method that looks at the data that has already gone through in order to find the pieces it can express in an especially compact form. However...
2) Do the packets need to be compressed separately from each other? If so, then LZW is definitely not the method you want; it will not have time to build its dictionary up to a size that will give substantial compression results by the end of a single packet. The only chance of getting really substantial compression in this scenario, IMHO, is to use a hard-coded dictionary.
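To make that concrete, here is a minimal sketch using the preset-dictionary support in Python's zlib (the zdict parameter of compressobj/decompressobj); the packet contents and dictionary bytes are made up for this example:

    import json
    import zlib

    packet = json.dumps(
        {"vector": {"latitude": 16, "longitude": 18, "altitude": 20}}
    ).encode()

    # Per-packet deflate with no shared state: too little input to build
    # a useful dictionary, so the ratio on a short packet is poor.
    plain = zlib.compress(packet, 9)

    # Deflate seeded with a hard-coded dictionary of bytes expected to recur;
    # the receiving side mirrors this with zlib.decompressobj(zdict=zdict).
    zdict = b'"vector" "latitude" "longitude" "altitude"'
    comp = zlib.compressobj(9, zlib.DEFLATED, zdict=zdict)
    seeded = comp.compress(packet) + comp.flush()

    print(len(packet), len(plain), len(seeded))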
(Addendum to all of the above: as Michael Kohne points out, sending JSON means you're probably sending all text, which means that you're underusing bandwidth that is capable of carrying a much wider range of characters than you're using. However, the problem of how to pack characters that fall into the range 0-127 into containers that hold values 0-255 is fairly simple and, I think, can be left as "an exercise for the reader", as they say.)
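That said, here is one possible take on that exercise: a small sketch that packs 7-bit ASCII characters into a continuous bitstream, eight characters per seven bytes (the function name and the zero-padding of the final byte are my own conventions):

    def pack7(text: bytes) -> bytes:
        """Pack 7-bit ASCII bytes into a bitstream: 8 input bytes -> 7 output bytes."""
        bits = 0       # bit accumulator
        nbits = 0      # number of valid bits currently in the accumulator
        out = bytearray()
        for b in text:
            bits = (bits << 7) | (b & 0x7F)   # append the low 7 bits of each byte
            nbits += 7
            while nbits >= 8:
                nbits -= 8
                out.append((bits >> nbits) & 0xFF)  # emit one full byte
        if nbits:
            out.append((bits << (8 - nbits)) & 0xFF)  # zero-pad the final byte
        return bytes(out)

    print(len(pack7(b'{"vector": {"latitude": 16}}')))  # 28 bytes in, 25 bytes out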