In an environment that supports advanced Unicode features, what exactly do you do with the string length?
I want to make sure that the password is between a given minimum and maximum number of characters. Same with phone numbers, email addresses, etc.
This seems to have always been known as the length of the string.
This thread sounds like a bunch of scientists trying to make a simple concept a lot harder to understand.
If you do allow Unicode characters in whatever it is you're validating, then your approach is almost certainly wrong for some valid input.
For exact lengths, you often have a restricted character set (like for phone numbers) and can validate both the characters and the length with a single regex. And since the digits 0–9 are each one byte in UTF-8, byte length works too once the character check has passed.
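As a sketch of that idea, here is a hypothetical validator for a 10-digit phone number (the digit count and the field name are assumptions for illustration, not anything from the thread): the character class restricts the alphabet, and the quantifier pins the length, in one check.

```python
import re

def is_valid_phone(s: str) -> bool:
    """Accept exactly ten ASCII digits; reject anything else.

    fullmatch() anchors the pattern to the whole string, so this
    validates the character set and the length in a single step.
    """
    return re.fullmatch(r"[0-9]{10}", s) is not None

# Because 0-9 are single-byte in UTF-8, any string that passes the
# character check also has byte length == character count.
print(is_valid_phone("5551234567"))    # ten digits: accepted
print(is_valid_phone("555-123-4567"))  # hyphens outside the class: rejected
```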
Unless you're doing text layout, you usually don't end up needing the exact character count of arbitrary UTF-8 text.
> This seems to have always been known as the length of the string.
Sure. And by this definition, the string discussed in TFA (a facepalm emoji with a skin-tone modifier) objectively has 5 characters (Unicode code points) in it, and therefore a length of 5. And it has always had 5 characters in it, ever since it first became possible to create such a string.
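You can check this directly. A minimal sketch in Python, whose `len()` counts Unicode code points; the same string also shows why other languages report different "lengths" (TFA's title is about its 7 UTF-16 code units):

```python
# The facepalm string from TFA, spelled out as escapes:
# U+1F926 FACE PALM, U+1F3FC skin-tone modifier, U+200D ZWJ,
# U+2642 MALE SIGN, U+FE0F VARIATION SELECTOR-16.
s = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"

print(len(s))                           # 5 code points
print(len(s.encode("utf-8")))           # 17 UTF-8 bytes
print(len(s.encode("utf-16-le")) // 2)  # 7 UTF-16 code units
```

Which of those three numbers is "the length" depends entirely on which unit your language counts in.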
Similarly, "é" has one character in it, but "é" has two despite appearing visually identical. Furthermore, those two strings will not compare equal in any sane programming language without explicit normalization (unless HN's software has normalized them already). If you allow passwords or email addresses to contain things like this, then you have to reckon with that brute fact.
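The composed/decomposed pair above can be reproduced with the standard library's `unicodedata` module; a minimal sketch:

```python
import unicodedata

precomposed = "\u00E9"   # "é" as one code point: U+00E9
decomposed = "e\u0301"   # "e" plus U+0301 COMBINING ACUTE ACCENT

# Visually identical, but different code point sequences:
print(precomposed == decomposed)           # False
print(len(precomposed), len(decomposed))   # 1 2

# NFC normalization composes the pair into the single code point,
# after which the strings compare equal.
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```

If you accept such input in passwords or identifiers, you have to decide where in your pipeline normalization happens, or two "identical" passwords will hash differently.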
None of this is new. These things have been true since Unicode was introduced in 1991.
Do you mean "byte"? Or "rune"?