Using a BufferedInputStream is useful because it cuts down on read system calls. But can't the same effect be achieved, say when reading a file, by using FileInputStream's read method that takes a byte array as argument? A lot of bytes are read in a single call. Am I right? Then why do we need to wrap it inside a BufferedInputStream object?
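To make the question concrete, here is a minimal sketch of the kind of bulk read I mean (the file is a temp file created just for the example):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BulkRead {
    /** Reads a file with repeated bulk read(byte[]) calls; returns the total byte count. */
    static int readAll(Path file) throws IOException {
        byte[] buf = new byte[8192]; // each read() can return up to 8 KB in one call
        int total = 0;
        try (FileInputStream in = new FileInputStream(file.toFile())) {
            int n;
            while ((n = in.read(buf)) != -1) { // one trip to the file system per call
                total += n;
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Create a sample file so the example is self-contained.
        Path file = Files.createTempFile("bulk", ".bin");
        Files.write(file, new byte[20_000]);
        System.out.println(readAll(file) + " bytes read");
        Files.delete(file);
    }
}
```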

It depends: whether you want to do the work yourself, whether you want to control the read size of the buffer, whether you don't want line endings getting hacked off (if using readLine), and so on.

I assume you are talking about the advantages of using a BufferedInputStream to wrap a FileInputStream. But we can control the read size with FileInputStream too, by changing the size of the byte array we pass to read().

Secondly, I couldn't find a readLine() method in the documentation for BufferedInputStream.

Okay, readLine is on BufferedReader. In any case, the things I mentioned apply when you use a plain InputStream; BufferedInputStream handles them for you.

Thanks. You gave two main arguments in favour of BufferedInputStream, but:

1. Line endings are still hacked off, as there is no readLine.

2. We can adjust the buffer size by adjusting the size of the byte array that we pass to read().

Then why should we use BufferedInputStream? We already get the buffering facility via the read(byte[]) overload.

> 1. Line endings are still hacked as there is no readLine.

Raw input streams shouldn't be used for reading character data anyway, so this is a moot point. Byte-based streams have no notion of a "line", hence no readLine() method. You're making the mistake of assuming every stream is a character-based stream.

> We can already have the buffering facility via the proper read() function

No, that's not buffering; that's the ability to read multiple bytes in a single sweep. You aren't providing buffering, you're just minimising the number of calls to the file system by choosing an appropriate byte array size. Each read() call would still need to access the file system in your case. With a buffered stream, the buffer may well hold much more than the amount requested, in which case multiple read() invocations can be served without touching the file system.
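The point above can be demonstrated. In this sketch (counts and sizes are illustrative), a counting wrapper records how often the underlying source is actually read; thousands of single-byte read() calls through a BufferedInputStream result in only a handful of trips to the file system, one per buffer refill:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferingDemo {
    /** Wraps a stream and counts how often the underlying source is actually read. */
    static class CountingStream extends FilterInputStream {
        int underlyingReads = 0;
        CountingStream(InputStream in) { super(in); }
        @Override public int read() throws IOException {
            underlyingReads++;
            return super.read();
        }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            underlyingReads++;
            return super.read(b, off, len);
        }
    }

    /** Performs `singleByteReads` read() calls; returns how many hit the file. */
    static int underlyingReads(Path file, int singleByteReads) throws IOException {
        try (FileInputStream fis = new FileInputStream(file.toFile())) {
            CountingStream counter = new CountingStream(fis);
            BufferedInputStream in = new BufferedInputStream(counter, 8192);
            for (int i = 0; i < singleByteReads; i++) {
                in.read(); // almost always served straight from the 8 KB buffer
            }
            return counter.underlyingReads;
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("demo", ".bin");
        Files.write(file, new byte[16_384]);
        System.out.println(underlyingReads(file, 16_384) + " underlying reads for 16384 read() calls");
        Files.delete(file);
    }
}
```

Without the BufferedInputStream in the middle, each of those 16,384 read() calls would go to the file system individually.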

Is all this impossible to do ourselves? No, definitely not, but the question that needs to be asked here is: do we really need to do it ourselves? :-)

Thanks a lot, I got it. One more thing: buffering helps reduce system calls, so even when we are reading a small file, we should buffer. But the buffering mechanism must also have its own overhead. Is the overhead large enough that we should avoid buffering for small files? That seems to be the only place where buffering should not be used.

It's a good idea to use a buffered stream/reader even for small files. That way, if you run into performance problems later on, you can tweak the size of the buffer and improve performance without changing a lot of code (just the buffer size).

But like all performance-related things, code tweaked for a specific scenario performs much better than generic code. For example, in some cases it might make more sense to read the *entire* file in a single sweep, reducing the system calls even further. Profiling is your friend.
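A minimal sketch of the read-it-all-at-once approach, using java.nio's Files.readAllBytes (the temp file here stands in for whatever small file you'd actually read):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WholeFile {
    public static void main(String[] args) throws IOException {
        // Sample file so the example is self-contained.
        Path file = Files.createTempFile("whole", ".bin");
        Files.write(file, new byte[4096]);

        // One sweep: a single call pulls the entire file into memory.
        // Fine for small files; avoid for files that may not fit in RAM.
        byte[] data = Files.readAllBytes(file);
        System.out.println("read " + data.length + " bytes in one sweep");
        Files.delete(file);
    }
}
```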
