How do I use the 'json' module to read in one JSON object at a time?

Here is a slight modification of Martijn Pieters' solution (below), which will handle JSON strings separated by whitespace.

import json
import functools

def json_parse(fileobj, decoder=json.JSONDecoder(), buffersize=2048,
               delimiters=None):
    remainder = ''
    for chunk in iter(functools.partial(fileobj.read, buffersize), ''):
        remainder += chunk
        while remainder:
            try:
                # Strip the delimiters (whitespace by default) before decoding
                stripped = remainder.strip(delimiters)
                result, index = decoder.raw_decode(stripped)
                yield result
                remainder = stripped[index:]
            except ValueError:
                # Not enough data to decode, read more
                break

For example, if data.txt contains JSON strings separated by a space:

{"business_id": "1", "Accepts Credit Cards": true, "Price Range": 1, "type": "food"} {"business_id": "2", "Accepts Credit Cards": true, "Price Range": 2, "type": "cloth"} {"business_id": "3", "Accepts Credit Cards": false, "Price Range": 3, "type": "sports"}

then

In [47]: list(json_parse(open('data.txt')))
Out[47]: 
[{u'Accepts Credit Cards': True,
  u'Price Range': 1,
  u'business_id': u'1',
  u'type': u'food'},
 {u'Accepts Credit Cards': True,
  u'Price Range': 2,
  u'business_id': u'2',
  u'type': u'cloth'},
 {u'Accepts Credit Cards': False,
  u'Price Range': 3,
  u'business_id': u'3',
  u'type': u'sports'}]

If your JSON document contains a list of objects and you want to read one object at a time, you can use the iterative JSON parser ijson for the job. It will only read more content from the file when it needs to decode the next object.

Note that you should use it with the YAJL library; otherwise you will likely not see any performance increase.
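As a minimal sketch (the filename data.json and a top-level-array layout are assumptions for illustration; 'item' is ijson's prefix for the elements of the root array):

import ijson  # picks the fastest available backend, e.g. YAJL, at import time

# Stream the elements of a top-level JSON array one object at a time,
# without loading the whole file into memory.
with open('data.json', 'rb') as f:
    for obj in ijson.items(f, 'item'):
        print(obj)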

That being said, unless your file is really big, reading it completely into memory and then parsing it with the normal JSON module will probably still be the best option.
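A minimal sketch of that approach, again assuming the file holds a single top-level JSON array:

import json

# Read and parse the whole document in one go; fine for files
# that comfortably fit in memory.
with open('data.json') as f:
    objects = json.load(f)  # a list, if the top-level value is an array

for obj in objects:
    print(obj)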


Generally speaking, putting more than one JSON object into a file makes that file invalid, broken JSON. That said, you can still parse data in chunks using the JSONDecoder.raw_decode() method.

The following will yield complete objects as the parser finds them:

from json import JSONDecoder
from functools import partial


def json_parse(fileobj, decoder=JSONDecoder(), buffersize=2048):
    buffer = ''
    for chunk in iter(partial(fileobj.read, buffersize), ''):
        buffer += chunk
        while buffer:
            try:
                result, index = decoder.raw_decode(buffer)
                yield result
                buffer = buffer[index:].lstrip()
            except ValueError:
                # Not enough data to decode, read more
                break

This function reads from the given file object in chunks of buffersize characters and has the decoder parse whole JSON objects out of the buffer as they become complete. Each parsed object is yielded to the caller.
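To make the mechanics concrete: raw_decode() parses the first complete document in a string and reports the index where parsing stopped, leaving the rest for the next pass:

from json import JSONDecoder

decoder = JSONDecoder()
# Returns the parsed object plus the index where parsing ended.
result, index = decoder.raw_decode('{"a": 1} {"b": 2}')
print(result)  # {'a': 1}
print(index)   # 8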

Use it like this:

with open('yourfilename', 'r') as infh:
    for data in json_parse(infh):
        print(data)  # replace with your own processing

Use this only if your JSON objects are written to the file back-to-back, with no newlines in between. If you do have newlines, and each JSON object is limited to a single line, you have a JSON Lines document, in which case you can use the approach from Loading and parsing a JSON file with multiple JSON objects in Python instead.
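For that case, a minimal JSON Lines sketch (the filename data.jsonl is an assumption) needs nothing beyond the standard json module, since every line is a complete document:

import json

# JSON Lines: one complete JSON document per line.
with open('data.jsonl') as f:
    for line in f:
        line = line.strip()
        if line:  # tolerate blank lines
            print(json.loads(line))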
