Comparison of Unicode encodings
This article compares Unicode encodings. Two situations are considered: eight-bit-clean environments, and environments such as Simple Mail Transfer Protocol that forbid use of byte values with the high bit set. Such prohibitions originally allowed for transmission links that carried only seven data bits, but they remain in the standards, so software must still generate messages that comply with them. Standard Compression Scheme for Unicode and Binary Ordered Compression for Unicode are excluded from the comparison tables because their size is difficult to quantify simply.
Compatibility issues
A UTF-8 file that contains only ASCII characters is identical to an ASCII file.
UTF-16 and UTF-32 are incompatible with ASCII files, and so require Unicode-aware programs to display, print and manipulate them.
This means that even systems whose native string representation is UTF-16, such as Windows and Java, typically store text such as program source code in 8-bit ASCII, not UTF-16. Indeed, it is very rare[citation needed] to find a UTF-16-encoded text file on any system unless it is part of some more complex structure. One counterexample is the "strings" file used by Mac OS X 10.3 applications for lookup of internationalized versions of messages; these default to UTF-16, and "files encoded using UTF-8 are not guaranteed to work. When in doubt, encode the file using UTF-16"[1].
XML is normally encoded as UTF-8, and rarely if ever as UTF-16.
Further, UTF-16 files contain many null bytes, which are incompatible with normal C string handling, so programs need to be specially written to handle UTF-16 files. By contrast, legacy programs can generally handle UTF-8-encoded files even when they contain non-ASCII characters.
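These points can be demonstrated with a short Python sketch (an illustrative example; the sample string is arbitrary):

```python
# Illustrative sketch: ASCII text is unchanged by UTF-8, but not by UTF-16/UTF-32.
text = "Hello"

assert text.encode("utf-8") == text.encode("ascii")   # identical bytes
print(text.encode("utf-16-le"))   # b'H\x00e\x00l\x00l\x00o\x00' -- embedded null bytes
print(text.encode("utf-32-le"))   # b'H\x00\x00\x00e\x00\x00\x00...' -- even more null bytes

# The null bytes would terminate a naive C string early, which is why byte-oriented
# legacy code copes with UTF-8 but not with UTF-16 or UTF-32.
```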
Size issues
UTF-32 requires four bytes to encode any character. Since characters outside the Basic Multilingual Plane (BMP) are typically rare, a document encoded in UTF-32 is often nearly twice as large as its UTF-16-encoded equivalent, because UTF-16 uses two bytes for characters inside the BMP and four bytes otherwise.
UTF-8 uses between one and four bytes to encode a character. It requires one byte for ASCII characters, making it half the size of UTF-16 for text consisting mostly of ASCII. For other Latin characters and many non-Latin alphabets it requires two bytes, the same as UTF-16. Characters in the range U+0800 to U+FFFF require three bytes in UTF-8; only a few frequently used Western characters, such as the euro sign € (U+20AC), fall in this range. Characters above U+FFFF, outside the BMP, need four bytes in both UTF-8 and UTF-16.
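These per-character counts can be checked with Python's built-in codecs (an illustrative sketch; the sample characters are arbitrary):

```python
# Illustrative byte counts per code point for a few sample characters.
# The -le codec variants are used so that no byte order mark is added.
samples = ["A",           # U+0041, ASCII
           "\u00e9",      # U+00E9, é
           "\u20ac",      # U+20AC, €
           "\U00010348"]  # U+10348, 𐍈, outside the BMP
for ch in samples:
    print(f"U+{ord(ch):06X}",
          len(ch.encode("utf-8")),      # 1, 2, 3, 4
          len(ch.encode("utf-16-le")),  # 2, 2, 2, 4
          len(ch.encode("utf-32-le")))  # 4, 4, 4, 4
```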
All printable characters in UTF-EBCDIC use at least as many bytes as in UTF-8, and most use more, because of the decision to allow the C1 control codes to be encoded as single bytes.
For seven-bit environments, UTF-7 is clearly more space-efficient than the combination of any other Unicode encoding with quoted-printable or base64.
Processing issues
For processing, a format should be easy to search, truncate, and generally process safely. All normal Unicode encodings use some form of fixed-size code unit. Depending on the format and the code point to be encoded, one or more of these code units will represent the code point. To allow easy searching and truncation, the sequence of code units for one code point must not occur within a longer sequence or across the boundary of two other sequences. UTF-8, UTF-16, UTF-32 and UTF-EBCDIC have these important properties, but UTF-7 and GB18030 do not.
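For example, in UTF-8 the continuation bytes of a multi-byte character always lie in the range 0x80–0xBF, so it is always possible to back up to the start of the character containing a given byte. A minimal Python sketch (the helper function is illustrative, not from any library):

```python
def utf8_char_start(data: bytes, i: int) -> int:
    """Back up from index i to the first byte of the UTF-8 character containing it."""
    while i > 0 and 0x80 <= data[i] <= 0xBF:   # continuation bytes are 10xxxxxx
        i -= 1
    return i

s = "a€b".encode("utf-8")       # b'a\xe2\x82\xacb'
print(utf8_char_start(s, 2))    # 1 -- byte 2 is inside '€', which starts at index 1
print(utf8_char_start(s, 4))    # 4 -- byte 4 is the ASCII character 'b'
```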
Fixed-size code units can be helpful, but even with a fixed byte count per code point (as in UTF-32) there is no fixed byte count per displayed character, because of combining characters. When working heavily with an API that has standardised on a particular Unicode encoding, it is generally best to use that same encoding, to avoid converting before every call to the API. Similarly, server-side software may be simpler if it processes text in the same format in which it communicates.
UTF-16 is popular because many APIs date from the time when Unicode was a 16-bit fixed-width encoding. Unfortunately, using UTF-16 makes characters outside the Basic Multilingual Plane a special case, which increases the risk of oversights in their handling. That said, programs that mishandle surrogate pairs probably also have problems with combining sequences, so using UTF-32 is unlikely to solve the more general problem of poor handling of multi-code-unit characters.
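A short sketch of the surrogate-pair special case (the emoji is just an arbitrary character outside the BMP):

```python
# Illustrative sketch: a character outside the BMP needs two UTF-16 code units.
ch = "\U0001F600"                        # 😀, U+1F600
print(len(ch))                           # 1 code point
print(len(ch.encode("utf-16-le")) // 2)  # 2 code units -- a surrogate pair
print(len(ch.encode("utf-32-le")) // 4)  # 1 code unit in UTF-32
```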
For communication and storage
UTF-16 and UTF-32 are not byte-oriented, so a byte order must be selected when transmitting them over a byte-oriented network or storing them in a byte-oriented file. This may be achieved by standardising on a single byte order, by specifying the endianness as part of external metadata (for example, the MIME charset registry has distinct UTF-16BE and UTF-16LE registrations as well as the plain UTF-16 one), or by using a byte order mark (BOM) at the start of the text. UTF-8 does not have this problem.
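Python's standard codecs can illustrate the difference (a small sketch; the utf-16 codec writes a BOM and uses the machine's native byte order):

```python
# Illustrative sketch of the byte-order options.
text = "A"
print(text.encode("utf-16-be"))   # b'\x00A'              -- fixed big-endian order, no BOM
print(text.encode("utf-16-le"))   # b'A\x00'              -- fixed little-endian order, no BOM
print(text.encode("utf-16"))      # e.g. b'\xff\xfeA\x00' -- BOM plus the platform's native order
print(text.encode("utf-8"))       # b'A'                  -- byte order is irrelevant
```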
If the byte stream is subject to corruption then some encodings recover better than others. UTF-8 and UTF-EBCDIC are best in this regard, as they can always resynchronise at the start of the next good character. UTF-16 and UTF-32 handle corrupt bytes well (again recovering at the next good character), but a lost byte garbles all following text. GB18030 may be thrown out of sync by a corrupt or missing byte and has no designed-in recovery.
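A rough Python sketch of what a single lost byte does to UTF-8 versus UTF-16 (the sample string is arbitrary):

```python
# Illustrative sketch: effect of one lost byte on later text.
good = "abcé€xyz"

u8 = bytearray(good.encode("utf-8"))
del u8[4]                                                 # lose a byte inside 'é'
print(bytes(u8).decode("utf-8", errors="replace"))        # abc�€xyz -- only 'é' is lost

u16 = bytearray(good.encode("utf-16-le"))
del u16[8]                                                # lose a byte in the middle
print(bytes(u16).decode("utf-16-le", errors="replace"))   # everything after the damage is garbled
```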
In detail
The tables below list the number of bytes per code point for different Unicode ranges. Any additional comments needed are included in the table. The figures assume that overheads at the start and end of the block of text are negligible.
N.B. The tables below list numbers of bytes per code point, not per user-visible "character" (or "grapheme cluster"). It can take multiple code points to describe a single grapheme cluster, so even in UTF-32 care must be taken when splitting or concatenating strings.
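For instance (a small sketch; the decomposed "é" is just one convenient example of a multi-code-point grapheme cluster):

```python
# Illustrative sketch: one grapheme cluster, two code points, eight UTF-32 bytes.
combined = "e\u0301"                      # 'e' followed by U+0301 COMBINING ACUTE ACCENT
print(len(combined))                      # 2 code points
print(len(combined.encode("utf-32-le")))  # 8 bytes
print(combined[:1])                       # naive truncation strips the accent: 'e'
```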
Eight-bit environments
| Code range (hexadecimal) | UTF-8 | UTF-16 | UTF-32 | UTF-EBCDIC | GB18030 |
|---|---|---|---|---|---|
| 000000 – 00007F | 1 | 2 | 4 | 1 | 1 |
| 000080 – 00009F | 2 | 2 | 4 | 1 | 2 for characters inherited from GB2312/GBK (e.g. most Chinese characters), 4 for everything else |
| 0000A0 – 0003FF | 2 | 2 | 4 | 2 | 2 or 4 (as above) |
| 000400 – 0007FF | 2 | 2 | 4 | 3 | 2 or 4 (as above) |
| 000800 – 003FFF | 3 | 2 | 4 | 3 | 2 or 4 (as above) |
| 004000 – 00FFFF | 3 | 2 | 4 | 4 | 2 or 4 (as above) |
| 010000 – 03FFFF | 4 | 4 | 4 | 4 | 4 |
| 040000 – 10FFFF | 4 | 4 | 4 | 5 | 4 |
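Most of the rows above can be reproduced with Python's built-in codecs (a sketch; UTF-EBCDIC is not in the standard library and is omitted, and the GB18030 column depends on whether the particular character was inherited from GB2312/GBK):

```python
# Byte counts at the top of each code-range row (UTF-8, UTF-16, UTF-32, GB18030).
for cp in (0x7F, 0x9F, 0x3FF, 0x7FF, 0x3FFF, 0xFFFF, 0x3FFFF, 0x10FFFF):
    ch = chr(cp)
    print(f"U+{cp:06X}",
          len(ch.encode("utf-8")),
          len(ch.encode("utf-16-le")),
          len(ch.encode("utf-32-le")),
          len(ch.encode("gb18030")))   # 1 for ASCII, otherwise 4 here (none of these are GB2312/GBK characters)
```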
Seven-bit environments
This table may not cover every special case and so should be used for estimation and comparison only. To accurately determine the size of text in an encoding, see the actual specifications.
| Code range (hexadecimal) | UTF-7 | UTF-8 quoted-printable | UTF-8 base64 | UTF-16 quoted-printable | UTF-16 base64 | UTF-32 quoted-printable | UTF-32 base64 | GB18030 quoted-printable | GB18030 base64 |
|---|---|---|---|---|---|---|---|---|---|
| 000000 – 00001F | same as 000080 – 00FFFF | 3 | 1⅓ | 6 | 2⅔ | 12 | 5⅓ | 3 | 1⅓ |
| 000020 – 00003C | 1 for "direct characters" and possibly "optional direct characters" (depending on the encoder setting), 2 for +, otherwise same as 000080 – 00FFFF | 1 | 1⅓ | 4 | 2⅔ | 10 | 5⅓ | 1 | 1⅓ |
| 00003D (equals sign) | as above | 3 | 1⅓ | 6 | 2⅔ | 12 | 5⅓ | 3 | 1⅓ |
| 00003E – 00007E | as above | 1 | 1⅓ | 4 | 2⅔ | 10 | 5⅓ | 1 | 1⅓ |
| 00007F | 5 for an isolated character inside a run of single-byte characters; for a run, 2⅔ per character plus padding to a whole number of bytes plus two to start and finish the run | 3 | 1⅓ | 6 | 2⅔ | 12 | 5⅓ | 3 | 1⅓ |
| 000080 – 0007FF | as above | 6 | 2⅔ | 2–6 depending on whether the byte values need to be escaped | 2⅔ | 8–12 depending on whether the final two byte values need to be escaped | 5⅓ | 4–6 for characters inherited from GB2312/GBK (e.g. most Chinese characters), 8 for everything else | 2⅔ for characters inherited from GB2312/GBK (e.g. most Chinese characters), 5⅓ for everything else |
| 000800 – 00FFFF | as above | 9 | 4 | 2–6 depending on whether the byte values need to be escaped | 2⅔ | 8–12 depending on whether the final two byte values need to be escaped | 5⅓ | 4–6 for characters inherited from GB2312/GBK, 8 for everything else | 2⅔ for characters inherited from GB2312/GBK, 5⅓ for everything else |
| 010000 – 10FFFF | 8 for an isolated character; for a run, 5⅓ per character plus padding to a whole number of bytes plus two to start and finish the run | 12 | 5⅓ | 8–12 depending on whether the low bytes of the surrogates need to be escaped | 5⅓ | 8–12 | 5⅓ | 8 | 5⅓ |
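A few of these figures can be checked with Python's standard codecs and MIME helpers (a rough sketch; real MIME output adds line-length and header overhead not counted here):

```python
import base64, quopri

ch = "€"                                     # U+20AC, a BMP character that is not "direct"
u8 = ch.encode("utf-8")                      # 3 bytes, all with the high bit set
u16 = ch.encode("utf-16-be")                 # 2 bytes

print(ch.encode("utf-7"))                    # b'+IKw-' -- 5 bytes for an isolated character
print(quopri.encodestring(u8))               # b'=E2=82=AC' -- 3 output characters per escaped byte
print(len(base64.b64encode(u16 * 3)) / 3)    # 2.666... characters per character (2⅔)
```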
Historical: UTF-5 and UTF-6
Proposals have been made for a UTF-5 and UTF-6 for the internationalization of domain names (IDN). The UTF-5 proposal used a base 32 encoding, where Punycode is (among other things, and not exactly) a base 36 encoding. The name UTF-5 for a code unit of 5 bits is explained by 2⁵ = 32.[2] The UTF-6 proposal added a run-length encoding to UTF-5; here 6 simply stands for UTF-5 plus 1.[3] The IETF IDN WG later adopted the more efficient Punycode for this purpose.[4]
Not being seriously pursued: UTF-9 and UTF-18
RFC 4042, "UTF-9 and UTF-18 Efficient Transformation Formats of Unicode", defines these encodings, but they are not being actively pursued. The RFC was released on April 1, 2005 as an April Fools' Day RFC, and the formats are of marginal use, for example on computers with 36-bit word lengths.
Others: UTF-1 and BOCU-1
See UTF-1 for a comparison of UTF-1, BOCU-1, and UTF-8.

