It surprises me somewhat just how little is known about text encoding and code pages by professionals who work with data every day. The fundamentals should be known by every data professional, so this post is intended to cover just that.
Firstly, I know that as an intelligent reader and data professional you know that characters exist in the presentation layer only. Kind of – the glyphs do, at least. Underneath, text still exists as a sequence of bits, and depending on the encoding used those bits can be interpreted in many different ways. It’s up to the client reading the data to know which text encoding is being used so the correct characters can be displayed to the user. Why reiterate this? Well, it’s fundamental to understanding text encoding.
For example, let’s say I encode my alphabet as A=0, B=1, C=2 and so on. I send you the message “SQLTUNA” which would be passed over the wire as 18 16 11 19 20 13 0. Now let’s say you’re using the coding system of A=1, B=2, C=3 and so forth. When you display the message it would come out as “RPKSTM?” where the final character (?) is unknown. I don’t want to be known as RPKSTM?, so for that reason alone it’s important to have a standard encoding system that both parties are aware of.
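A rough sketch of that mismatch in Python (the message and both mappings are just the ones from the example above):

```python
# Encode with the sender's scheme: A=0, B=1, C=2, ...
def encode(text):
    return [ord(c) - ord('A') for c in text]

# Decode with the receiver's mistaken scheme: A=1, B=2, C=3, ...
def decode(codes):
    # 0 has no mapping in this scheme, so it comes out as '?'
    return ''.join(chr(n - 1 + ord('A')) if n >= 1 else '?' for n in codes)

codes = encode("SQLTUNA")
print(codes)           # [18, 16, 11, 19, 20, 13, 0]
print(decode(codes))   # RPKSTM?
```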
Good news! We have standards. Lots of them. Without going into too much history, some time in the 1960s the American Standard Code for Information Interchange shortened their name and the ASCII standard was born. OK, so there may have been a bit of intelligence and thought applied in between, but that’s just semantics.
ASCII maps the main Latin characters to the numbers 0 to 127: lower and upper case letters, numbers and common punctuation, as well as some non-printable control characters such as the carriage return and tab (numbers 0-31, plus 127, are reserved for control characters). It all fits neatly into 7 bits. And that’s where the problems started. Once it became commonplace to transfer data in 8-bit sequences, a.k.a. bytes, there was suddenly a “spare” bit opening up another 128 possible characters.
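A quick sanity check of the 7-bit claim in Python:

```python
# All ASCII code points fit in 7 bits (0-127)
for ch in "SQLTuna \t\r!":
    assert ord(ch) < 128

# 0-31 are control characters; printable characters start at 32 (space)
print(ord('\t'), ord('\r'), ord(' '), ord('A'))   # 9 13 32 65
```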
To cut a long story short, the “spare” or “extended” characters were used differently by different entities, mainly desktop providers such as IBM. The printable character set of 32-126 was kept as standard by pretty much everybody; however, the extended set of 128-255 was used in different ways, and in some cases even the control character range of 0-31 was changed.
Thus code pages came into existence. Allocating a code page to a particular set of characters allowed these encodings to be exchanged and mapped correctly by the recipient. So for example, if you received codes from someone and they told you it was encoded using code page 437 (IBM’s original code page) you would decode it to produce the output below:
201 205 187 13 200 205 188
“A Pretty Box”
However, if you mistakenly used the wrong code page to decode it, such as the Windows 1252 code page, you would end up with the following instead:
“A fraction better than Old Macdonald…”
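You can reproduce this mismatch with Python’s built-in codecs, which include both code pages:

```python
data = bytes([201, 205, 187, 13, 200, 205, 188])

# Decoded as IBM code page 437: box-drawing characters
print(data.decode('cp437'))    # top of a box, carriage return, bottom of a box

# Decoded (wrongly) as Windows-1252: accented letters and a fraction
print(data.decode('cp1252'))   # ÉÍ» ... ÈÍ¼
```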
You get the point. Many code pages have now been made official to describe the various different mappings of characters to codes. They all share the consistency of codes 32-126 from the ASCII standard, but beyond that you need to know the code page you’re dealing with in order to display the text correctly.
What happens if you need more than 256 characters in a set? Some languages such as Japanese have over a thousand characters. That simply ain’t gonna fit into 1 byte. So in stepped Unicode to rescue the situation. Devised around the late 1980s/early 90s, in conjunction with the Universal Character Set (an ISO standard), the intention was to create a mapping for every character in every language. They pretty much succeeded, and currently have mappings for over 110,000 characters!
An important point to note here – Unicode does not tell you how to encode your characters for storage or transmission. It simply maps the characters to code points. They are generally in the form “U+” followed by a hexadecimal number. For example:
U+0053 – S
U+0051 – Q
U+004C – L
U+0054 – T
U+0075 – u
U+006E – n
U+0061 – a
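The code points above come straight from the characters themselves; in Python, `ord` gives you the code point directly:

```python
# Print the Unicode code point for each character
for ch in "SQLTuna":
    print(f"U+{ord(ch):04X} - {ch}")
```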
Those of you paying attention will notice that the code points above are using 4-digit hex numbers, equivalent to 2 bytes or 16 bits. That’s 64k possible mappings, which is way below the 110k they have standardised. Well, Unicode provisioned the use of 5- and 6-digit hex numbers (up to 0x10FFFF), meaning they can currently map over 1 million characters should they wish.
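The arithmetic behind that “over 1 million” checks out:

```python
# 17 planes (0x00 to 0x10), each with 0x10000 (64k) code points
planes = 0x10 + 1
per_plane = 0x10000

print(planes * per_plane)                    # 1114112 possible code points
print(0x10FFFF + 1 == planes * per_plane)    # True
```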
On to the implementations. You need to store and transmit text data, and you want to use Unicode. Multiple different encodings exist for this. UTF-8 has become a bit of a favourite, and UTF-16 (which extends the older UCS-2) is still popular. UTF stands for Unicode Transformation Format, and UCS for Universal Character Set. Many different methods exist, but these are probably the most well known.
I’ll start with UCS-2, as it is the simplest to explain. It is a fixed-length 2-byte encoding of the Unicode code point. That’s it. Up to 64k characters allowed, which at the time of its creation was sufficient. Taking my code point example above, we would see the following storage in UCS-2:
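A sketch of that storage in Python – note Python has no ‘ucs-2’ codec, but for BMP characters big-endian UTF-16 produces the identical 2-bytes-per-character layout:

```python
# Each character becomes its 2-byte code point, nothing more
encoded = "SQLTuna".encode('utf-16-be')
print(encoded.hex(' ', 2))   # 0053 0051 004c 0054 0075 006e 0061
print(len(encoded))          # 14 bytes: 2 per character
```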
UTF-16 superseded UCS-2 and extended the implementation to allow code points beyond 0xFFFF. If we go back to where I mentioned the upper range of the current Unicode standard, we have 0x10FFFF. The leading part (0x00 to 0x10) can be thought of as the “plane”, and the 2 bytes following it as the character within that plane. Values up to 0x10 are allowed, meaning 17 planes, and each plane has 0x0000 to 0xFFFF (64k) possible combinations. Plane 0x00 is the first plane, which contains the most common characters and is often referred to as the Basic Multilingual Plane (BMP). Stay with me. For BMP characters, UTF-16 behaves in pretty much the same way as UCS-2, and the two are almost identical. However, UTF-16 makes use of a reserved range, 0xD800 to 0xDFFF, to map into the other planes: characters outside the BMP are encoded as a pair of 2-byte values (a “surrogate pair”) drawn from that range. This means UTF-16 varies between 2-byte and 4-byte encodings depending on whether the character exists within the BMP or not. I’m certainly not going to regurgitate the full implementation when you can read it for yourself online, such as here. Partly because it’s not relevant to this post, but also because I’d have to click on that link to get those details myself…
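To see the variable width in action, compare a BMP character with one from a supplementary plane (the musical G clef, U+1D11E, is just a convenient example):

```python
# 'S' (U+0053) lives in the BMP: 2 bytes, same as UCS-2
print("S".encode('utf-16-be').hex())      # 0053

# U+1D11E is outside the BMP: a 4-byte surrogate pair, and both
# halves fall inside the reserved 0xD800-0xDFFF range
clef = '\U0001D11E'
print(clef.encode('utf-16-be').hex())     # d834dd1e
```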
UTF-8? That’s variable width also. For the ASCII range it uses only one byte, and that byte is identical to ASCII – UTF-8 was designed for backwards compatibility, so the first 128 characters are encoded exactly the same, with the high bit set to 0. When the high bit of the first byte is set, it signifies that more bytes are required for the character, and the leading bits of that byte indicate whether the sequence is 2, 3 or 4 bytes long.
UTF-8’s popularity comes from its flexibility. When passing characters in the ASCII range, UTF-8 needs no conversion, it is identical to ASCII and uses a single byte per character. It only requires extra bytes for characters outside this range such as for non-Latin characters and those outside the BMP. The prevalence of English across the web for example, means UTF-8 is mostly 1 byte per character but provides the flexibility to use other character sets. Compare this to UTF-16 which requires at least 2 bytes per character – typically double the size. Across the internet this can make a big difference.
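The width growth is easy to demonstrate – one byte for ASCII, up to four for characters outside the BMP:

```python
# UTF-8 width grows with the code point:
# 'S' (ASCII), 'é' (Latin-1 range), '€' (BMP), G clef (supplementary plane)
for ch in ['S', '\u00e9', '\u20ac', '\U0001D11E']:
    print(f"U+{ord(ch):04X}: {len(ch.encode('utf-8'))} byte(s)")

# ASCII-range text is byte-for-byte identical to plain ASCII
assert "SQLTuna".encode('utf-8') == "SQLTuna".encode('ascii')
```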
SQL Server has two types of text storage, [var]char and n[var]char. It’s pretty simple really. The first is a single-byte encoding using an extended-ASCII code page (determined by collation, as we’ll see below), and the second is the 2-byte UCS-2 encoding of Unicode.
The upper 128 characters in the [var]char data type are determined by the Collation of the database, so this becomes very important when moving data in and out. Ultimately the data is stored as a 1-byte number between 0 and 255, but the Collation determines the character and therefore the glyph you see when extracting the data. It can also have an effect on clients/software used to import and export the data. Some software will attempt to convert the text encoding depending on the supplied settings, therefore knowing the encoding is vital to avoid unwanted results.
A great example in the current world is migrating a database from a local instance to WASD (Windows Azure SQL Database) using the SQL Azure Migration Wizard, available on Codeplex here: http://sqlazuremw.codeplex.com/
It’s a great tool that essentially provides a wrapper for the bcp utility, automatically generating bcp commands to extract data from the chosen databases and objects, then creates the objects in Azure and bcp’s the data in. There are also various checks performed in between for compatibility.
Let’s say both our local database and our Windows Azure SQL Database are using the SQL collation SQL_Latin1_General_CP1_CI_AS. Often the local database will use the Windows collation Latin1_General_CI_AS, which is almost identical, however for this example I wanted them the same.
I create a table in the local database for migration, and insert a value:
drop table dbo.migrationtest_local;

create table dbo.migrationtest_local
(
    value varchar(64) not null
);

create clustered index ix_migrationtest_value on dbo.migrationtest_local (value);

insert into dbo.migrationtest_local
values ('he cc’d his friend''s wife.');
It looks like a normal string being inserted, but look closer. The first apostrophe doesn’t need escaping as it’s a curly apostrophe and actually sits outside the normal character range of 32-127 for this collation. It is actually character 146, or in hex 0x92. I run a quick query to look at the data:
I’ve highlighted the difference above. The 6th character (in red) is our curly apostrophe with a value of 0x92 (146). The 19th character (in blue) is a standard apostrophe with a value of 0x27 (39).
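The same inspection can be done outside SQL Server; here Python’s cp1252 codec stands in for the CP1 code page behind the SQL_Latin1_General_CP1 collation:

```python
# The string as inserted: a curly apostrophe in cc'd,
# a standard apostrophe in friend's
s = "he cc\u2019d his friend's wife."
data = s.encode('cp1252')

print(hex(data[5]))    # 0x92 - the curly apostrophe (6th character)
print(hex(data[18]))   # 0x27 - the standard apostrophe (19th character)
print(len(data))       # 26
```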
As a comparison I run the same SQL on my WASD, the only difference being the suffix on the table name to identify the platform:
drop table dbo.migrationtest_wasd;

create table dbo.migrationtest_wasd
(
    value varchar(64) not null
);

create clustered index ix_migrationtest_value on dbo.migrationtest_wasd (value);

insert into dbo.migrationtest_wasd
values ('he cc’d his friend''s wife.');
I run the SELECT to see how the results compare:
Identical, we see the same results.
Now to run the Azure Migration Wizard. Leaving all the defaults, I select the option to analyze/migrate a database, then connect to my local instance and select the relevant database, in my case the database was called DBA.
I select a specific object, namely my new table dbo.migrationtest_local, and generate the SQL. It generates the CREATE TABLE statement, and also displays the bcp commands it will use. Here is my export bcp command:
The first 2 bytes in red are the data length, in reverse order, i.e. we have 0x001A, or 26, bytes. Also highlighted are the two apostrophes, and you can see that they have the same binary value as we saw in the database. All good so far.
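That “reverse order” is just a little-endian length prefix, which you can verify:

```python
# bcp native format prefixes each value with its length,
# least significant byte first
prefix = bytes([0x1A, 0x00])
print(int.from_bytes(prefix, 'little'))         # 26

# ...which matches the length of the inserted string
print(len("he cc\u2019d his friend's wife."))   # 26
```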
Next we connect to WASD via the wizard, select our target database, and execute the SQL and bcp command to create the table and import the data.
I run a SELECT against the new table in my SQL Database (note the table name – it is the same as our local database as the SQL was generated by the wizard):
Uh-oh! That’s not right is it? What has happened? We know the collation is the same, and the data looked correct when we ran the same SQL statements above.
Going back to the SQL Azure Migration Wizard, I grab the bcp command for the import:
The import statement adds a few parameters, such as the credentials for starters (which I’ve removed), but also batch size and packet size. However, notice the one key omission? Yup, the -C RAW option is not there. By default, the tool isn’t configured to use it for import. When omitted, bcp defaults to -C OEM, or in other words the Original Equipment Manufacturer code page. This means conversion. Not only that, but conversion based on the client, which in our case involves the host nodes in Azure, NOT just our SQL Database.
The converted character value of 0xC6 (198) appears to come from code page 850. The curly apostrophe is byte 0x92 in our database’s code page (1252), and in code page 850 that same byte value, 0x92, maps to the AE diphthong Æ (Unicode U+00C6). It seems the host node is using code page 850 as its OEM code page: bcp has interpreted the raw byte 0x92 as Æ and re-encoded it as 0xC6 before insertion.
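The round trip can be reproduced directly, treating cp1252 as the database code page and cp850 as the OEM code page:

```python
stored = b'\x92'                  # the curly apostrophe's byte in cp1252

# bcp's OEM default decodes the raw byte using code page 850...
misread = stored.decode('cp850')
print(misread)                    # Æ (U+00C6)

# ...and re-encodes it for the cp1252 target
converted = misread.encode('cp1252')
print(converted.hex())            # c6
```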
The fix? In this case using the -C RAW option for both export and import would work, or even omitting it for both. This can be done in the SQL Azure Migration Wizard config file, or of course you can perform the tasks manually.
Knowing how text is stored and encoded is vital when you work with data, as is recognising the signs that something is wrong, or hasn’t converted correctly. Different software and different machines can interpret text in different ways, so always try to be aware of any non-standard characters in your data and account for them when moving data around.
I have only touched on the subject of text encoding and collations, and will probably write further posts going into more depth, but as a starting point I think every data professional should know the details covered in this post. It may save a lot of head scratching in the future when your text doesn’t look quite how you expected.