ByteBuffer.allocate() vs. ByteBuffer.allocateDirect()

Ron Hitchens, in his excellent book Java NIO, offers what I thought could be a good answer to your question:

Operating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the garbage collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.

For this reason, the notion of a direct buffer was introduced. Direct buffers are intended for interaction with channels and native I/O routines. They make a best effort to store the byte elements in a memory area that a channel can use for direct, or raw, access by using native code to tell the operating system to drain or fill the memory area directly.
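At the API level the two kinds of buffer look identical; only the allocation call differs. A minimal sketch of the distinction:

```java
import java.nio.ByteBuffer;

public class BufferKinds {
    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] inside the JVM heap.
        ByteBuffer heap = ByteBuffer.allocate(4096);
        // Direct buffer: backed by native memory outside the heap.
        ByteBuffer direct = ByteBuffer.allocateDirect(4096);

        System.out.println(heap.isDirect());    // false
        System.out.println(direct.isDirect());  // true
        System.out.println(heap.hasArray());    // true  (backing byte[] is accessible)
        System.out.println(direct.hasArray());  // false (no accessible backing array)
    }
}
```

Everything else about the two buffers (position, limit, put/get, flip, etc.) behaves identically; the difference only matters to the channel underneath.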

Direct byte buffers are usually the best choice for I/O operations. By design, they support the most efficient I/O mechanism available to the JVM. Nondirect byte buffers can be passed to channels, but doing so may incur a performance penalty. It's usually not possible for a nondirect buffer to be the target of a native I/O operation. If you pass a nondirect ByteBuffer object to a channel for write, the channel may implicitly do the following on each call:

  1. Create a temporary direct ByteBuffer object.
  2. Copy the content of the nondirect buffer to the temporary buffer.
  3. Perform the low-level I/O operation using the temporary buffer.
  4. The temporary buffer object goes out of scope and is eventually garbage collected.
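The four steps above can be sketched roughly as follows. This is purely illustrative: the class and method names are invented, and real channel implementations differ (and, as noted below, typically cache the temporary buffer rather than discarding it):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Hypothetical sketch of what a channel may do internally when handed
// a nondirect (heap) buffer. Names are illustrative, not from the JDK.
final class CopyingWriteSketch {
    static int write(WritableByteChannel channel, ByteBuffer src) throws IOException {
        if (src.isDirect()) {
            return channel.write(src);          // direct buffer: no copy needed
        }
        // 1. Create a temporary direct ByteBuffer object.
        ByteBuffer tmp = ByteBuffer.allocateDirect(src.remaining());
        // 2. Copy the content of the nondirect buffer to the temporary buffer.
        tmp.put(src.duplicate()).flip();
        // 3. Perform the low-level I/O operation using the temporary buffer.
        int written = channel.write(tmp);
        src.position(src.position() + written); // reflect the consumed bytes
        // 4. tmp goes out of scope here and is eventually garbage collected.
        return written;
    }
}
```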

This can potentially result in buffer copying and object churn on every I/O, which are exactly the sorts of things we'd like to avoid. However, depending on the implementation, things may not be this bad. The runtime will likely cache and reuse direct buffers or perform other clever tricks to boost throughput. If you're simply creating a buffer for one-time use, the difference is not significant. On the other hand, if you will be using the buffer repeatedly in a high-performance scenario, you're better off allocating direct buffers and reusing them.
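The usual reuse pattern is to allocate one direct buffer up front and clear() it between operations. A sketch, assuming a generic channel-to-channel copy:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;
import java.nio.channels.WritableByteChannel;

final class ChannelCopy {
    // One direct buffer, allocated once and reused for every transfer.
    static long copy(ReadableByteChannel in, WritableByteChannel out) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(8192);
        long total = 0;
        while (in.read(buf) != -1) {
            buf.flip();                  // switch from filling to draining
            while (buf.hasRemaining()) {
                total += out.write(buf); // drain, possibly in several writes
            }
            buf.clear();                 // reset for the next read
        }
        return total;
    }
}
```

Because the buffer lives for the whole loop, the one-time cost of allocateDirect() is amortized over every read and write that follows.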

Direct buffers are optimal for I/O, but they may be more expensive to create than nondirect byte buffers. The memory used by direct buffers is allocated by calling through to native, operating system-specific code, bypassing the standard JVM heap. Setting up and tearing down direct buffers could be significantly more expensive than heap-resident buffers, depending on the host operating system and JVM implementation. The memory-storage areas of direct buffers are not subject to garbage collection because they are outside the standard JVM heap.
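You can get a rough feel for the setup-cost difference on your own JVM with a quick timing loop. This is an unscientific sketch, not a benchmark: the buffer sizes and counts are arbitrary, and results will vary widely by JVM and OS, so no particular ratio should be expected.

```java
import java.nio.ByteBuffer;

public class AllocationCost {
    public static void main(String[] args) {
        final int count = 256, size = 64 * 1024; // arbitrary sketch values

        long t0 = System.nanoTime();
        for (int i = 0; i < count; i++) ByteBuffer.allocate(size);        // heap
        long heapNs = System.nanoTime() - t0;

        long t1 = System.nanoTime();
        for (int i = 0; i < count; i++) ByteBuffer.allocateDirect(size);  // native
        long directNs = System.nanoTime() - t1;

        // No expected output: the numbers depend entirely on JVM and OS.
        System.out.printf("heap: %d us, direct: %d us%n",
                heapNs / 1_000, directNs / 1_000);
    }
}
```

For anything beyond a rough feel, a proper harness such as JMH is the right tool.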

The performance tradeoffs of using direct versus nondirect buffers can vary widely by JVM, operating system, and code design. By allocating memory outside the heap, you may subject your application to additional forces of which the JVM is unaware. When bringing additional moving parts into play, make sure that you're achieving the desired effect. I recommend the old software maxim: first make it work, then make it fast. Don't worry too much about optimization up front; concentrate first on correctness. The JVM implementation may be able to perform buffer caching or other optimizations that will give you the performance you need without a lot of unnecessary effort on your part.
