How does Java store UTF-16 characters in its 16-bit char type?

The answer is in the javadoc:

The char data type (and therefore the value that a Character object
encapsulates) are based on the original Unicode specification, which
defined characters as fixed-width 16-bit entities. The Unicode
standard has since been changed to allow for characters whose
representation requires more than 16 bits.

The range of legal code
points is now U+0000 to U+10FFFF, known as Unicode scalar value.
(Refer to the definition of the U+n notation in the Unicode standard.)
The set of characters from U+0000 to U+FFFF is sometimes referred to
as the Basic Multilingual Plane (BMP). Characters whose code points
are greater than U+FFFF are called supplementary characters. The Java
2 platform uses the UTF-16 representation in char arrays and in the
String and StringBuffer classes. In this representation, supplementary
characters are represented as a pair of char values, the first from
the high-surrogates range, (\uD800-\uDBFF), the second from the
low-surrogates range (\uDC00-\uDFFF).

A char value, therefore,
represents Basic Multilingual Plane (BMP) code points, including the
surrogate code points, or code units of the UTF-16 encoding. An int
value represents all Unicode code points, including supplementary code
points. The lower (least significant) 21 bits of int are used to
represent Unicode code points and the upper (most significant) 11 bits
must be zero.

Unless otherwise specified, the behavior with respect to
supplementary characters and surrogate char values is as follows: The
methods that only accept a char value cannot support supplementary
characters. They treat char values from the surrogate ranges as
undefined characters. For example, Character.isLetter('\uD840')
returns false, even though this specific value if followed by any
low-surrogate value in a string would represent a letter. The methods
that accept an int value support all Unicode characters, including
supplementary characters. For example, Character.isLetter(0x2F81A)
returns true because the code point value represents a letter (a CJK
ideograph). In the Java SE API documentation, Unicode code point is
used for character values in the range between U+0000 and U+10FFFF,
and Unicode code unit is used for 16-bit char values that are code
units of the UTF-16 encoding. For more information on Unicode
terminology, refer to the Unicode Glossary.
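
To see what this means in practice, here is a small sketch using U+2F81A, the CJK ideograph cited in the javadoc above (the class name is just for illustration):

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // U+2F81A lies outside the BMP, so UTF-16 stores it as a
        // surrogate pair of two char values.
        String s = new String(Character.toChars(0x2F81A));

        System.out.println(s.length());                       // 2 -> two UTF-16 code units
        System.out.println(s.codePointCount(0, s.length()));  // 1 -> one code point

        // The char overload only sees an isolated high surrogate...
        System.out.println(Character.isLetter('\uD840'));     // false
        // ...while the int overload sees the whole code point.
        System.out.println(Character.isLetter(0x2F81A));      // true
    }
}
```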

Simply put:

  • the 16-bit char rule was designed for an older version of the Unicode standard
  • you sometimes need two chars to represent a Unicode code point that isn’t in the Basic Multilingual Plane. This kind of “works” because you don’t frequently manipulate chars directly, especially to handle code points outside the BMP (see the sketch after this list for how to iterate by code point instead).
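
As an illustration of that second point, here is a minimal sketch of iterating by char versus by code point; U+1D11E (a musical symbol) is used purely as an example of a non-BMP code point:

```java
public class CodePointIteration {
    public static void main(String[] args) {
        // "a" followed by U+1D11E, written here as its surrogate pair.
        String s = "a\uD834\uDD1E";

        // char by char: three code units, 'a' plus the two surrogates.
        for (int i = 0; i < s.length(); i++) {
            System.out.printf("code unit : U+%04X%n", (int) s.charAt(i));
        }

        // code point by code point: the two actual characters.
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            System.out.printf("code point: U+%04X%n", cp);
            i += Character.charCount(cp); // advance by 1 or 2 chars
        }
    }
}
```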

Put even more simply:

  • a Java char doesn’t represent a Unicode code point (well, not always).

As an aside, it can be noted that Unicode’s evolution past the BMP made UTF-16 largely irrelevant: UTF-16 no longer even offers a fixed bytes-per-character ratio. That’s why more modern languages are based on UTF-8. This manifesto helps explain why.
