Parallel file parsing, multiple CPU cores
CPython does not easily provide the threading model you are looking for: the global interpreter lock means threads cannot execute Python bytecode on multiple cores at once. You can get something similar using the multiprocessing module and a process pool.
Such a solution could look something like this:
import multiprocessing

def worker(lines):
    """Make a dict out of the parsed, supplied lines."""
    result = {}
    for line in lines:
        k, v = parse(line)  # parse() is your own line-parsing function
        result[k] = v
    return result

if __name__ == '__main__':
    # Configurable options; different values may work better.
    numprocs = 8
    numlines = 100
    lines = open('input.txt').readlines()
    # Create the process pool.
    pool = multiprocessing.Pool(processes=numprocs)
    # Map the chunks of lines to a list of result dicts.
    result_list = pool.map(worker,
        (lines[i:i + numlines] for i in range(0, len(lines), numlines)))
    # Reduce the result dicts into a single dict.
    result = {}
    for partial in result_list:
        result.update(partial)
This can be done using Ray, a library for writing parallel and distributed Python.
To run the code below, first create input.txt as follows:

printf "1\n2\n3\n4\n5\n6\n" > input.txt
Then you can process the file in parallel by adding the @ray.remote decorator to the parse function and executing many copies of it in parallel, as follows:
import ray
import time

ray.init()

@ray.remote
def parse(line):
    time.sleep(1)
    # Strip the trailing newline so keys don't contain '\n'.
    return 'key' + line.strip(), 'value'

# Submit all of the "parse" tasks in parallel and wait for the results.
keys_and_values = ray.get([parse.remote(line) for line in open('input.txt')])
# Create a dictionary out of the results.
result = dict(keys_and_values)
Note that the optimal way to do this will depend on how long the parse function takes to run. If it takes one second (as above), then parsing one line per Ray task makes sense. If it takes 1 millisecond, then it probably makes sense to parse a bunch of lines (e.g., 100) per Ray task, as in the sketch below.
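A minimal sketch of that batched variant, continuing the example above (parse_lines and the batch size of 100 are illustrative assumptions, not part of the original answer):

@ray.remote
def parse_lines(lines):
    # Parse a whole batch in one task to amortize the per-task overhead.
    # Assumes ray.init() has already been called, as in the snippet above.
    return [('key' + line.strip(), 'value') for line in lines]

lines = open('input.txt').readlines()
batches = [lines[i:i + 100] for i in range(0, len(lines), 100)]
pairs_per_batch = ray.get([parse_lines.remote(b) for b in batches])
result = dict(pair for pairs in pairs_per_batch for pair in pairs)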
Your script is simple enough that the multiprocessing module can also be used. However, as soon as you want to do anything more complicated, or want to leverage multiple machines instead of just one, it will be much easier with Ray.
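For instance, running on a cluster changes only the ray.init call (a sketch, assuming a Ray cluster has already been started on those machines):

import ray

# Connect to an already-running Ray cluster instead of starting a local one;
# the remote functions and the ray.get calls stay exactly the same.
ray.init(address='auto')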
See the Ray documentation.
- Split the file into 8 smaller files.
- Launch a separate script to process each file.
- Join the results (see the sketch below).
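A minimal sketch of those three steps, assuming a hypothetical process_chunk.py that reads the file named on its command line and writes tab-separated key/value lines to <input>.out:

import subprocess

NUM_CHUNKS = 8
lines = open('input.txt').readlines()
step = -(-len(lines) // NUM_CHUNKS)  # ceiling division
procs = []
for i in range(NUM_CHUNKS):
    chunk = 'chunk_%d.txt' % i
    # Write one slice of the input to its own file.
    with open(chunk, 'w') as f:
        f.writelines(lines[i * step:(i + 1) * step])
    # Launch one independent, long-running worker process per chunk.
    procs.append(subprocess.Popen(['python', 'process_chunk.py', chunk]))

for p in procs:
    p.wait()  # join: wait for every worker to finish

# Merge the per-chunk outputs into a single dict.
result = {}
for i in range(NUM_CHUNKS):
    with open('chunk_%d.txt.out' % i) as f:
        for line in f:
            k, v = line.rstrip('\n').split('\t', 1)
            result[k] = v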
Why that's the best way...
- That's simple and easy: you don't have to program in any way differently from linear processing.
- You get the best performance by launching a small number of long-running processes.
- The OS will deal with context switching and IO multiplexing, so you don't have to worry about that (the OS does a good job).
- You can scale to multiple machines without changing the code at all.
- ...