Why is using BufferedInputStream to read a file byte by byte faster than using FileInputStream?
In FileInputStream, the method read() reads a single byte. From the source code:
/**
* Reads a byte of data from this input stream. This method blocks
* if no input is yet available.
*
* @return the next byte of data, or <code>-1</code> if the end of the
* file is reached.
* @exception IOException if an I/O error occurs.
*/
public native int read() throws IOException;
This is a native call to the OS, which goes to the disk to read the single byte. This is a heavy operation.
With a BufferedInputStream, the method delegates to an overloaded read() method that reads 8192 bytes at a time and buffers them until they are needed. It still returns only the single byte (but keeps the others in reserve). This way the BufferedInputStream makes fewer native calls to the OS to read from the file.
For example, suppose your file is 32768 bytes long. To get all the bytes into memory with a FileInputStream, you would require 32768 native calls to the OS. With a BufferedInputStream, you would require only 4, regardless of the number of read() calls you make (still 32768).
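You can see this for yourself with a small sketch (the counting wrapper, the class name, and the temp file are mine, purely for illustration): it wraps the FileInputStream in a FilterInputStream that counts bulk reads, then reads the whole file byte by byte through a BufferedInputStream.

```java
import java.io.*;
import java.nio.file.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BufferedReadDemo {
    public static void main(String[] args) throws IOException {
        // Hypothetical demo file: 32768 zero bytes in a temp file.
        Path tmp = Files.createTempFile("buffered-demo", ".bin");
        Files.write(tmp, new byte[32768]);

        // Counts how often the BufferedInputStream actually hits the
        // underlying FileInputStream.
        AtomicInteger bulkReads = new AtomicInteger();
        long total = 0;

        try (InputStream counting = new FilterInputStream(new FileInputStream(tmp.toFile())) {
                @Override
                public int read(byte[] b, int off, int len) throws IOException {
                    int n = super.read(b, off, len);
                    if (n > 0) bulkReads.incrementAndGet(); // one bulk request to the OS
                    return n;
                }
            };
            BufferedInputStream in = new BufferedInputStream(counting)) {

            while (in.read() != -1) { // 32768 single-byte read() calls
                total++;
            }
        }
        Files.delete(tmp);

        // On a regular local file this typically reports 4 bulk reads
        // for 32768 bytes (8192-byte buffer).
        System.out.println(total + " bytes read, " + bulkReads.get() + " bulk reads");
    }
}
```

Each of the 32768 read() calls is answered from the in-memory buffer; only the 4 buffer refills touch the OS.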
As to how to make it faster, you might want to consider Java 7's NIO FileChannel class, but I have no evidence to support this.
Note: if you used FileInputStream's read(byte[], int, int) method directly instead, with a byte[] of at least 8192 bytes, you wouldn't need a BufferedInputStream wrapping it.
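As a sketch of that alternative (class name and temp file are mine, for illustration), reading directly into an 8192-byte array makes one trip to the OS per array-fill, just like the buffered stream, with no wrapper needed:

```java
import java.io.*;
import java.nio.file.*;

public class DirectBulkRead {
    public static void main(String[] args) throws IOException {
        // Hypothetical demo file: 32768 zero bytes in a temp file.
        Path tmp = Files.createTempFile("bulk-demo", ".bin");
        Files.write(tmp, new byte[32768]);

        byte[] buf = new byte[8192]; // same size as BufferedInputStream's default buffer
        long total = 0;
        int calls = 0;

        try (FileInputStream in = new FileInputStream(tmp.toFile())) {
            int n;
            while ((n = in.read(buf, 0, buf.length)) != -1) {
                total += n;
                calls++; // each successful call is one trip to the OS
            }
        }
        Files.delete(tmp);

        System.out.println(total + " bytes in " + calls + " read calls");
    }
}
```

The trade-off is that you now manage the buffer and the partial-read loop yourself, which is exactly the bookkeeping BufferedInputStream does for you.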
A BufferedInputStream wrapped around a FileInputStream will request data from the FileInputStream in big chunks (8192 bytes by default). Thus if you read 1000 characters one at a time, the FileInputStream will only have to go to the disk once. This will be much faster!