Why use iter_content and chunk_size in Python requests?
From the documentation, `chunk_size` is the size of data that the app will read into memory when `stream=True`.
For example, if the size of the response is 1000 bytes and `chunk_size` is set to 100, the response is split into ten chunks.
This is to prevent loading the entire response into memory at once (it also allows you to implement some concurrency while you stream the response, so that you can do work while waiting for the request to finish).
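A minimal sketch of what that looks like (the URL here is just a placeholder, not from the original question):

```python
import requests

# Placeholder URL; any endpoint that returns a body behaves the same way.
response = requests.get("https://example.com/some-resource", stream=True)

total = 0
# With stream=True the body is not downloaded up front; each iteration of
# iter_content yields at most chunk_size bytes, so only ~100 bytes of the
# body sit in memory at any one time.
for chunk in response.iter_content(chunk_size=100):
    total += len(chunk)

print(f"read {total} bytes in chunks of up to 100 bytes")
```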
Streaming requests are usually used for media. For example, if you want to download a 500 MB .mp4 file using requests, you should stream the response (and write the stream to disk in chunks of `chunk_size`) instead of waiting for all 500 MB to be loaded into Python at once.
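A sketch of such a streamed download, assuming a placeholder URL and output filename:

```python
import requests

url = "https://example.com/video.mp4"  # placeholder URL

with requests.get(url, stream=True) as response:
    response.raise_for_status()
    with open("video.mp4", "wb") as f:
        # Write the file piece by piece instead of holding ~500 MB in memory.
        for chunk in response.iter_content(chunk_size=1024 * 1024):  # 1 MiB chunks
            f.write(chunk)
```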
If you want to implement any UI feedback (such as download progress like "downloaded <chunk_size> bytes..."), you will need to stream and chunk. If the response contains a Content-Length header, you can also calculate the percentage of completion on every chunk you save.