> This is called Normalization Form Canonical Composition (NFC).
Is there something like a "Round Midnight and No Coffee Form" where the programmer just renders the text to check whether the output of each set of codepoints matches pixel for pixel?
I co-oped at IBM in the 1980s, and this is how we tested and verified the newest IBM PC hardware (at that point, IIRC, it would have been around the 286 / PCjr / OS/2 / smaller-form-factor IBM PC designs). We created tests on current systems with manual keyboard entry, recorded all the keystrokes and timing plus the screen buffers and output files at intervals, then ran the same tests on new hardware with a playback of the recorded input and compared the new output to the saved runs.
The closest thing Unicode has seems to be NFKC, but last I checked it still didn't correctly handle Greek and Cyrillic aliases of Latin characters, never mind anything more obscure.
The issue does not have anything to do with normalization per se, but stems from the fact that there are Unicode characters that are semantically different but nevertheless look exactly the same in most fonts. For example 'a' (U+0061 LATIN SMALL LETTER A) vs. 'а' (U+0430 CYRILLIC SMALL LETTER A).
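A tiny illustration of that pair, purely as a sketch (the byte values follow from the UTF-8 encodings of the two codepoints): the strings render identically in most fonts, yet no normalization form will ever make them compare equal, because they are canonically distinct characters.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *latin    = "a";        /* U+0061, one byte in UTF-8:  0x61      */
        const char *cyrillic = "\xD0\xB0"; /* U+0430, two bytes in UTF-8: 0xD0 0xB0 */

        /* Visually identical in most fonts, but different characters,
           so they never compare equal, normalized or not. */
        printf("equal: %s\n", strcmp(latin, cyrillic) == 0 ? "yes" : "no");
        return 0;
    }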
It's probably also futile to keep homoglyph tables without the context of the font being used, as т and т are the same letter, just with different styles (and in some fonts resemble T and m respectively in Latin). Unicode mentions homoglyphs briefly in UTR #36, but overall I feel that it's not really their job to solve that issue, as visually similar characters already exist in ASCII and other limited character sets and every context where those things are a problem probably needs to be evaluated differently with different mitigations.
> The examples in this article conform to the C89 standard, but we specify C99 in the Makefile because the ICU header files use C99-style (//) comments.
I found this quite interesting. Perhaps the closest I’ve come to grasping the core concepts, even though I’ve done a fair bit of work around the edges previously.
I think it’s because the library, when standing alone, would need to include the entire Unicode database. What I wonder is, when you install the library from a distribution’s package manager, does it package that data separately (so that it can hopefully be used by related software)?
Probably not. I would guess, from working with similar “static-data-heavy” libraries, that much of the data is “burned into” the libicu in the form of lookup functions that execute native-code representations of decision trees to give their responses. This data is meant to be “used by related software” by just linking libicu and asking it about the data.
Though, this does mean that in language runtimes that don’t want to pull in native libraries (because the runtime is trying to ensure some guarantee like soft-realtime or fault-tolerance), libicu can’t be linked, so the runtime actually has to do the same thing libicu does if it wants to have efficient Unicode lookup to the level that people expect: ship a copy of the database files from Unicode.org with the source, and convert them into source-code representations of decision-tree functions.
I've seen many distros provide a unicode-data package, containing the human-readable text files from the Unicode Character Database. These are sometimes handy to have around, but I'd be surprised if much software used them directly.
Reason: tons of data. Contrary to what other commenters said, it’s not (only) Unicode properties (they are tiny), but a lot more: from rules for spelling out numbers, to transliteration and collation, to word/sentence segmentation (some of which are absolutely non-trivial and sometimes require dictionaries of special cases).
Because it has to store all properties of all valid codepoints 0-0x10FFFF. It does it via perfect hashes for the fastest lookup, not via the space-saving 3-level arrays that most others use.
I described various implementation strategies here: http://perl11.org/blog/foldcase.html
Many developers believe that a case-insensitive comparison is achieved by mapping both strings being compared to either upper- or lowercase and then comparing the resulting bytes. The existence of functions such as ‘strcasecmp’ in some C libraries, for example, or common examples in programming books reinforces this belief:
if (strcmp(toupper(foo),toupper(bar))==0) { // a typical caseless comparison
which I guess should be C, but makes no sense at all. The standard functions toupper() and tolower() operate on single characters, not strings. Modifying entire strings in place and returning them also seems odd.
Also, the text leading up to the code talks about strcasecmp(), but the code doesn't use it, and claims the existence of strcasecmp() proves that people like to smash the case of strings before comparing them. Of course, strcasecmp() is the exact opposite: it just does a case-insensitive comparison and doesn't say anything about how that is achieved.
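For contrast, here's a minimal sketch of the kind of thing a byte-wise comparison in the spirit of strcasecmp does: fold each byte as it goes, without ever building an upper- or lowercased copy of either string. This is ASCII-only and says nothing about Unicode case folding, which is exactly the article's point; ascii_casecmp is just an illustrative name, not any libc's actual implementation.

    #include <ctype.h>

    /* Compare two strings case-insensitively, byte by byte (ASCII only). */
    int ascii_casecmp(const char *a, const char *b)
    {
        while (*a && *b) {
            int ca = tolower((unsigned char)*a);
            int cb = tolower((unsigned char)*b);
            if (ca != cb)
                return ca - cb;
            a++;
            b++;
        }
        return tolower((unsigned char)*a) - tolower((unsigned char)*b);
    }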
Benchmarked? For one lookup, or for repeated lookups?
Hashes have terrible cache locality. Unicode itself has locality, with the Greek characters generally separate from the Chinese characters and so on. The tree-based and array-based methods take advantage of this locality.
Just guessing, but based on statistics of web pages in Asian languages, most text consists mostly of the lower code points, no matter the language. So hash lookups end up being heavily biased towards small subsets of the data. And I wouldn't be surprised if the cache sizes of modern processors conspire to accelerate this pretty lopsided distribution of accesses considerably.
I’ve always wondered whether, in the context of segmenting/layout-ing entire Unicode documents (or large streams where you’re willing to buffer kilobytes at a time, like browser page rendering), there’d be an efficiency win for Unicode processing, in:
1. detecting (either heuristically, or using in-band metadata like HTML “lang”) the set of languages in use in the document; and then
2. rewriting the internal representation of the received document/stream-chunk from “an array of codepoints” to “an array of pairs {language ID, offset within a language-specific tokens table}.”
In other words, one could—with knowledge of which languages are in use in a document—denormalize the codepoints that are considered valid members of multiple languages’ alphabet/ideograph sets, into separate tokens for each language they appear in.
Each such token would “inherit” all the properties of the original Unicode codepoint it is a proxy for, but would only have to actually encode such properties as actually matter in the language it’s a token of.
And, as well, each language would be able to set defaults for the properties of its tokens, such that the tokens would only have to encode the exceptions to the defaults; or there could even be language-specific functions for decoding each property, such that languages could Huffman-compress together the particular properties that apply to them, given known frequencies of those properties among its tokens, making it cheaper to decode properties of commonly-encountered tokens, at the expense of decoding time for rarely-encountered tokens.
And, of course, this would give each language’s tokens data locality, such that the CPU could keep only the data (or embodied decision trees) in cache, for the languages that it’s actually using.
Since each token would know what its codepoint is, you could map this back to regular Unicode (e.g. UTF-8) when serializing it.
(Yes, I’m sort of talking about reimplementing code pages. But 1. they’d be code pages as materialized views of Unicode, and 2. you’d never expose the code-page representation to the world, only using it in your own text system.)
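Purely as a hypothetical sketch of the shape this could take (all names and field widths here are made up for illustration, not any existing library's format):

    #include <stdint.h>

    /* The document is re-tokenized into (language, index) pairs instead of
       raw codepoints. */
    typedef struct {
        uint8_t  lang;   /* ID of the detected language this token belongs to */
        uint16_t index;  /* offset into that language's token/property table  */
    } lang_token;

    /* Each language-table entry remembers the original codepoint, so the text
       can be serialized back to ordinary UTF-8, plus only the properties that
       actually vary within that language (everything else comes from the
       language's defaults). */
    typedef struct {
        uint32_t codepoint;      /* for round-tripping back to Unicode    */
        uint8_t  property_bits;  /* exceptions to the language's defaults */
    } lang_table_entry;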
I don't know why ICU did it that way. libunistring did it a bit better, but they also are too big and not performant enough to power coreutils.
The best approach is currently a hybrid of 3-level arrays and a bsearch in a small list of exceptions. This is about 10x smaller and has the same performance. The properties can be boolean, int or strings, so there's no one-size-fits-all solution.
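A rough sketch of the 3-level-array idea (table names and block sizes are illustrative, not any particular library's layout): the tables would be generated offline from the Unicode Character Database, identical blocks would be stored only once, and rare oddball codepoints would be pulled out into a small sorted exception list searched with bsearch.

    #include <stdint.h>

    extern const uint16_t stage1[];  /* indexed by codepoint bits 20..12         */
    extern const uint16_t stage2[];  /* 64-entry blocks, indexed by bits 11..6   */
    extern const uint8_t  stage3[];  /* 64-entry blocks of property values, 5..0 */

    /* Look up a one-byte property for any codepoint <= 0x10FFFF. */
    static uint8_t lookup_property(uint32_t cp)
    {
        uint32_t block2 = stage1[cp >> 12];                   /* pick a stage-2 block */
        uint32_t block3 = stage2[(block2 << 6) | ((cp >> 6) & 0x3F)];
        return stage3[(block3 << 6) | (cp & 0x3F)];           /* final value          */
    }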
> Reading lines into internal UTF-16 representation
Fail.
> It’s unwise to use UTF-32 to store strings in memory. In this encoding it’s true that every code unit can hold a full codepoint.
wchar_t is 32 bits on a number of platforms such as GNU/Linux, MacOS and Solaris. It behooves you to use that, and all the associated library functionality, rather than roll your own.
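As a minimal sketch of that approach (assumes a UTF-8 locale and UTF-8 source encoding; error handling kept short): decode the multibyte input into wchar_t with the standard library, then use the wide classification and case-mapping functions instead of hand-rolled tables.

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>
    #include <wctype.h>

    int main(void)
    {
        setlocale(LC_ALL, "");                /* pick up the user's (UTF-8) locale */

        const char *input = "Grüße";          /* multibyte (UTF-8) input           */
        wchar_t buf[64];
        size_t n = mbstowcs(buf, input, 64);  /* decode into 32-bit wchar_t        */
        if (n == (size_t)-1)
            return EXIT_FAILURE;

        for (size_t i = 0; i < n; i++)
            buf[i] = towlower(buf[i]);        /* per-codepoint lowercasing         */

        printf("%ls\n", buf);                 /* prints "grüße"                    */
        return EXIT_SUCCESS;
    }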
Curiously, the paragraph before the line "it's unwise to use UTF-32" ends with "Use the encoding preferred by your library and convert to/from UTF-8 at the edges of the program."
And that is the best advice there is. If you have a choice, use UTF-8, otherwise use whatever your libraries use.
Unless you have very special needs, forget about UTF-16.
Even on Windows it's best to keep your text in UTF-8 and convert it to and from UTF-16 when interacting with win32 APIs. Java, dotNet and JavaScript are the worst of all worlds because you're both stuck with wide characters (in their native string types) and have the intricacies of UTF-16 to consider. I guess the advice might have been better phrased as "Unless you're forced to, or have very special needs, stay away from UTF-16".
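For what it's worth, the boundary conversion itself is short with the stock Win32 functions; a minimal sketch (utf8_to_utf16 is just an illustrative helper name, and error handling is kept to the bare minimum):

    #include <windows.h>
    #include <stdlib.h>

    /* Convert a NUL-terminated UTF-8 string to a newly allocated UTF-16 string.
       Returns NULL on failure; the caller frees the result with free(). */
    wchar_t *utf8_to_utf16(const char *utf8)
    {
        int len = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, utf8, -1, NULL, 0);
        if (len == 0)
            return NULL;                       /* invalid UTF-8 or other error */

        wchar_t *utf16 = malloc(len * sizeof(wchar_t));
        if (utf16 == NULL)
            return NULL;

        if (MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS, utf8, -1, utf16, len) == 0) {
            free(utf16);
            return NULL;
        }
        return utf16;   /* WideCharToMultiByte goes the other way at the boundary */
    }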
> it's best to keep your text in UTF-8 and convert it to and from UTF-16 when interacting with win32 APIs
It’s extra source code to write and then support, extra machine code to execute, and likely extra memory to malloc/free. Too slow, which in my book automatically means “not best”.
> Java, dotNet and JavaScript are the worst of all worlds because you're both stuck with wide characters (in their native string types) and have the intricacies of UTF-16 to consider.
It’s just normal UTF-16, like in WinAPI and many other popular languages, frameworks and libraries. E.g. Qt is used a lot in the wild.
> the advice might have been better phrased as
It says exactly the opposite, “Use the encoding preferred by your library and convert to/from UTF-8 at the edges of the program.”
Spot on. When coding against "raw" win32 API (or NT kernel APIs and perhaps rare native usermode NT API), using UTF-16 is the only way to keep your sanity. Converting strings back and forth between UTF-8 and UTF-16 in that kind of case is just senseless waste of CPU cycles.
One API call might take multiple strings and each conversion often means memory allocation and freeing — something you usually try to avoid as much as possible if it's something that's going to run most of the time the system is powered on.
The situation can be different in cross-platform code. In those cases, UTF-8 is a preferable abstraction.
Just don't use it for filenames. Filenames are just bags of bytes, at least on Windows (well, 16-bit WCHARs, but the idea is the same) and Linux, and treating them as anything else is not a great idea.
When you’re writing code that you are 100% sure won’t ever become a performance bottleneck, you still care about development time. Very often, unless it’s throwaway code, you also care about the cost of support.
Writing any code at all when that code is not needed is always too slow, regardless of any technical factors.
Very little code in this world is needed. Much of it is, however, useful.
The person you replied to obviously isn't advocating for something they find useless.
Perhaps you could have instead asked "Why do you recommend doing this? I don't understand the benefit." But instead, you decided that they're advocating to do something useless for no reason.
> you decided that they're advocating to do something useless for no reason.
No, I decided they’re advocating to do something harmful for no reason.
They're advocating wasting hardware resources (as a developer I don’t like doing that) and wasting development time (as a manager I don’t like when developers do that). But worst of all, UTF-8 on Windows and converting to/from UTF-16 at the WinAPI boundary is a source of bugs: the kernel doesn’t guarantee that the bytes you get from these APIs are valid UTF-16. Quite the opposite, it guarantees to treat them as an opaque chunk of words.
UTF-8 has its place even on Windows, e.g. it makes sense for some network services, and even for RAM data when you know it’ll be 99% English (so it saves resources) and that data never hits WinAPI. But as soon as you’re consuming WinAPI, COM, UWP, the Windows shell, or any other native stuff, UTF-8 is just not good.
That very much depends on what you're doing. Constantly reencoding between UTF-16 and UTF-8 would be pointless. Not to mention that "UTF-16" on Windows usually means UCS-2, so you risk losing information if you reencode.
But if your application's strings are mostly independent of the WinAPI then sure, use UTF-8 and only convert when absolutely necessary.
It works exactly the same way on Linux. Neither the kernel nor the file system changes the bytes passed from userspace to the kernel, regardless of whether they are valid UTF-8 or not.
Pass an invalid UTF-8 file name, and those exact bytes will be written to the drive. https://www.kernel.org/doc/html/latest/admin-guide/ext4.html says “the file name provided by userspace is a byte-per-byte match to what is actually written in the disk”
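A minimal illustration of that (assumes an ordinary ext4 mount without the optional filename-encoding/casefold features enabled):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* 0xC3 0x28 is not valid UTF-8, yet the kernel stores the name
           byte-for-byte; ls may show it as gibberish, but it round-trips. */
        int fd = open("\xC3\x28-not-utf8.txt", O_CREAT | O_WRONLY, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        close(fd);
        return 0;
    }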
It's not exactly the same on Linux, because Linux doesn't have duplicate pairs of system calls for one-byte character strings and wide strings. Linux system calls are all char strings: null-terminated arrays of bytes. It's a very clear model.
Any interpretation of path-names as multi-byte character set data is up to user space.
> Linux doesn't have duplicate pairs of system calls for one-byte character strings and wide strings.
Neither does Windows. These “DoSomethingA” APIs aren’t system calls; they’re translated into Unicode-only NtDoSomething system calls, implemented in the kernel by the OS or by kernel-mode drivers as ZwDoSomething.
Windows system calls all operate on null-terminated arrays of 16-bit integers. It's a very clear model. Any interpretation of path names as characters is up to user space.
I don't know what you're talking about. You said Windows uses UTF-16 and pointed to Wikipedia. I'm only pointing out that that's only true by convention. Windows, even today, does not require that its file names be UTF-16.
Whether Linux analogously does the same or not (indeed it does) isn't something I was contesting.
The file system / object manager is only one part of the whole, though. Object names and namespaces in general will have that restriction, but in user-space there's a lot of Unicode that's treated as text, not a bag of code units. And those things are UTF-16.
And the article you’ve linked says “file system treats path and file names as an opaque sequence of WCHARs.” This means no information is lost in the kernel, either.
Indeed, the kernel doesn’t validate or normalize these WCHARs, but should it? I would be very surprised if I asked an OS kernel to create a file and it silently changed the name by doing some Unicode normalization.
I'm sorry if I was unclear but my point was that when you receive a string from the Windows API you cannot make any assumptions about it being valid UTF-16. Therefore converting it to UTF-8 is potentially lossy. So if you then convert it back from UTF-8 to UTF-16 and feed it to the WinAPI you'll get unexpected results. Which is why I feel converting back and forth all the time is risky.
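As a concrete sketch of that lossiness (Windows-only; WC_ERR_INVALID_CHARS requires Vista or later): an unpaired surrogate is a perfectly storable element of a Windows name, but it has no well-formed UTF-8 representation, so the conversion either fails or substitutes a replacement character.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* An unpaired surrogate: acceptable to the file system,
           not representable in well-formed UTF-8. */
        const wchar_t name[] = { 0xD800, L'x', 0 };

        char out[16];
        int n = WideCharToMultiByte(CP_UTF8, WC_ERR_INVALID_CHARS,
                                    name, -1, out, (int)sizeof(out), NULL, NULL);

        /* n == 0: the strict conversion refuses the unpaired surrogate.
           Without WC_ERR_INVALID_CHARS it would be replaced with U+FFFD,
           so the original 16-bit value can't be round-tripped either way. */
        printf("converted units: %d\n", n);
        return 0;
    }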
This is one reason why the WTF-8[0] encoding was created as a UTF-8-like encoding that supports invalid Unicode.
I know, and I was replying to the comment saying that UTF-16 is something that’s very rarely needed.
Personally, when working with strings in RAM, I have a slight preference for UTF-16, for 2 reasons:
1. When handling non-Western languages in UTF-8, branch prediction fails all the time. Spaces and punctuation use 1 byte/character, everything else 2-3 bytes/character in UTF-8. With UTF-16 it’s 99% 2 bytes/character and surrogate pairs are very rare, i.e. simple sequential non-vectorized code is likely to be faster for UTF-16.
2. When handling east Asian languages, UTF-16 uses less RAM, these languages use 3 bytes/character in UTF-8, 2 bytes/character in UTF-16.
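To put numbers on point 2, a small sketch (assumes a UTF-8 source/execution character set for the narrow literal; needs C11 for char16_t):

    #include <stdio.h>
    #include <string.h>
    #include <uchar.h>

    int main(void)
    {
        /* "日本語" -- three BMP ideographs. */
        const char     *utf8  = "日本語";   /* 3 bytes per character in UTF-8  */
        const char16_t *utf16 = u"日本語";  /* 2 bytes per character in UTF-16 */

        size_t units = 0;
        while (utf16[units])
            units++;

        printf("UTF-8:  %zu bytes\n", strlen(utf8));             /* prints 9 */
        printf("UTF-16: %zu bytes\n", units * sizeof(char16_t)); /* prints 6 */
        return 0;
    }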
But that’s only a slight preference. In 99% of cases I use whatever strings are native on the platform, or whatever requires the minimum amount of work to integrate. When doing native Linux development this often means UTF-8; on Windows it’s UTF-16.
This is the correct answer. There's no need for UTF-16 unless you're fixing up code that uses UCS-2. UTF-32 doesn't buy you anything other than bloat: in all cases you have to deal with graphemes that consist of multiple codepoints, so even UTF-32 is a sort of variable-length encoding.
UTF-8 is reasonably easy to deal with and very interoperable.
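A quick illustration of why even UTF-32 isn't "one unit per character" (needs C11 for char32_t):

    #include <stdio.h>
    #include <uchar.h>

    int main(void)
    {
        /* One user-perceived character ("é"), but two codepoints even in UTF-32:
           U+0065 LATIN SMALL LETTER E followed by U+0301 COMBINING ACUTE ACCENT. */
        const char32_t decomposed[] = { 0x0065, 0x0301, 0 };

        size_t codepoints = 0;
        while (decomposed[codepoints])
            codepoints++;

        /* Fixed-width code units, yet still not one unit per grapheme. */
        printf("codepoints: %zu\n", codepoints);  /* prints 2 */
        return 0;
    }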