python multiprocessing - process hangs on join for large queue
The qout queue in the subprocess gets full. The data you put into it from foo() doesn't fit in the buffer of the OS pipe used internally, so the subprocess blocks trying to push more data through. But the parent process is not reading this data: it is blocked too, waiting in join() for the subprocess to finish. This is a typical deadlock.
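To see the mechanism in isolation, here is a minimal sketch (the names child, q and p are made up, not from the original code). Queue.put() itself returns immediately; a background feeder thread does the actual write to the pipe and blocks once the buffer is full, so the child process can never finish, and a premature join() in the parent hangs forever:

from multiprocessing import Process, Queue

def child(q):
    # put() returns right away; the queue's feeder thread writes to the
    # pipe in the background and blocks once the pipe buffer is full
    for i in range(100000):
        q.put({'bar': i})

if __name__ == '__main__':
    q = Queue()
    p = Process(target=child, args=(q,))
    p.start()
    # p.join() here would deadlock: the child cannot flush its queue
    # because nobody reads it, so it never exits.
    results = [q.get() for _ in range(100000)]   # drain the queue first...
    p.join()                                     # ...then join() returns promptly
    print len(results)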
So, in practice, there is a limit on how much data you can leave sitting unread in a queue. Consider the following modification:
from multiprocessing import Process, Queue

def foo(qin, qout):
    while True:
        bar = qin.get()
        if bar is None:          # sentinel: stop the worker
            break
        #qout.put({'bar': bar})

if __name__ == '__main__':
    import sys

    qin = Queue()
    qout = Queue()               ## POSITION 1

    for i in range(100):
        #qout = Queue()          ## POSITION 2
        worker = Process(target=foo, args=(qin, qout))
        worker.start()
        for j in range(1000):
            x = i*100 + j
            print x
            sys.stdout.flush()
            qin.put(x**2)
        qin.put(None)            # sentinel for this worker
        worker.join()

    print 'Done!'
This works as-is (with the qout.put line commented out). If you try to save all 100,000 results, qout becomes too large: if I uncomment the qout.put({'bar': bar}) in foo and leave the definition of qout in POSITION 1, the code hangs. If, however, I move the qout definition to POSITION 2, the script finishes: each worker then gets its own qout holding only its 1000 small results, little enough to fit in the pipe buffer, so the worker can still flush its queue and exit.
So in short, you have to be careful that neither qin nor qout becomes too large. (See also: Multiprocessing Queue maxsize limit is 32767)