I like this way of measuring extra work. Is this standard at Google?
And there are internal calculators that tell you how much CPU, memory, network etc a SWE-year gets you. Same for other internal units, like the cost of a particular DB.
This allows you to make time/resource tradeoffs. Spending half an engineer’s year to save 0.5 SWE-y of CPU is not a great ROI. But if you get 10 SWE-y out of it, it’s probably a great idea.
I personally have used it to argue that we shouldn’t spend 2 weeks of engineering time to save a TB of DB disk space. The cost of the disk comes to less than a SWE-hour per year!
An example being Google unilaterally flipping on VP8/VP9 decode, which at the time was decoded purely on the CPU, or experimentally on the GPU.
It saved Google a few CPU cycles and some bandwidth but it nuked every user's CPU and battery. And the amount of energy YouTube consumed wholesale (so servers + clients) skyrocketed. Tragedy of the Commons.
The main economic unit for most engineers is FTE not $.
Windows/macOS don't seem to have a way to enable proper hinting anymore [0], and even FreeType (since 2.7 [1]) defaults to improper hinting (they call it "subpixel hinting", which doesn't make sense to me in theory, and practically still seems blurry, as if it's unhinted).
In case anyone's wondering what properly hinted text looks like, here's a screenshot [2]. This currently relies on setting the environment variable `FREETYPE_PROPERTIES=truetype:interpreter-version=35`, possibly some other configuration through fontconfig, and using a font with good hinting instructions (eg, DejaVu Sans and DejaVu Sans Mono in the screenshot).
My suspicion is that Windows moved away from font hinting after XP because it's very hard to create fonts with good sets of hinting instructions (aiui, OTF effectively only allows something like "autohinting"), so in the modern world of designer fonts it's easier to just have everyone look at blurry text. Some other minor reasons would be that UI scaling in Windows sometimes (or always?) introduces blurring anyway, and viewing raster images on screens of varying resolutions also introduces scaling blur.
[0] Windows still has a way to enable it, but it disables antialiasing at the same time, using an option called "Smooth edges of screen fonts" in "Performance Options". This basically makes it look like XP, which imo is an improvement, but not as good as FreeType, which can do hinting and antialiasing at the same time.
[1] https://freetype.org/freetype2/docs/hinting/subpixel-hinting...
[2] https://gist.githubusercontent.com/Maxdamantus/3a58d8e764b29...
I'm not an expert, but - I'm sorry, it's not.
The point of hinting is to change the shape of the glyphs so the rasterized result looks "better". What "better" is, of course, purely subjective, but most people would agree that it's better when perceived thicknesses of strokes and gaps are uniform, and the text is less blurry, so the eye can discern the shapes faster. I don't think that your rendering scores high points in that regard.
I'll take a phrase from your rendering: "it usually pays" [0]. I don't like it, I'm sorry. The hinter can't make up its mind whether the stroke width should be two pixels, or three pixels with faint pixels on the sides and an intense one in the center; therefore the perceived thicknesses vary, which increases eye strain. The "l"s are noticeably different; the "ys" at the end clump together into one blurry thing; and there's a completely unnecessary line of faint pixels at the bottom of the characters, which hinting should have prevented.
The second line is how it looks on Windows on 150% scale. Verdana is a different font, so it's an unfair comparison (Verdana was designed specifically to look good on low resolutions), and the rainbows can be off-putting, but I still think the hinter tucks the shapes into pixels better.
Maybe I don't understand something, or maybe there's a mistake.
Here's an updated version of your image, with the actual pixel data from my image copied in, at 8x and 1x scale [0]. It should be possible to see the pixels yourself if you load it into a tool like GIMP, which preserves pixels as you zoom in.
It should be fairly clear from the image above that the hinting causes the outlines (particularly, horizontal and vertical lines) to align to the pixel grid, which, as you say, both makes the line widths uniform and makes the text less blurry (by reducing the need for antialiasing; SSAA is basically the gold standard for antialiasing, which involves rendering at a higher resolution and then downscaling, meaning a single physical pixel corresponds to an average of multiple pixels from the original image).
Out of interest, I've done a bit of processing [1] to your image to see what ClearType is actually doing [2], and as described in the FreeType post I linked, it seems like it is indeed using vertical hints (so the horizontal lines don't have the colours next to them—this is obvious from your picture), and it seems like it is indeed ignoring any horizontal hints, since the grey levels around the vertical lines are inconsistent, and the image still looks horizontally blurry.
I might as well also link to this demo I started putting together a few years ago [3]. It uses FreeType to render with hinting within the browser, and it should handle different DPIs as long as `devicePixelRatio` is correct. I think this should work on desktop browsers as long as they're scaling aware (I think this is the case with UI scaling on Windows but not on macOS). Mobile browsers tend to give nonsense values here since they don't want to change it as the user zooms in/out. Since it's using FreeType alone without something like HarfBuzz, maybe some of the positioning is not optimal.
[0] https://gist.githubusercontent.com/Maxdamantus/3a58d8e764b29...
[1] After jiggling the image around and converting it back from 8x to 1x, I used this command to show each RGB subpixel in greyscale (assuming the typical R-G-B pixel layout used on LCD computer monitors):
width=137; height=53; stream /tmp/image_1x.png - | xxd -p | sed 's/../&&&/g' | xxd -r -p | convert -size $((width*3))x$((height)) -depth 8 rgb:- -interpolate nearest-neighbor -interpolative-resize $((100*8))%x$((300*8))% /tmp/image_8x_rgb.png
[2] https://gist.githubusercontent.com/Maxdamantus/3a58d8e764b29...

That said, one of the horrors of using the old font stacks, and in many ways the very thing that drove me away from fighting Linux desktop configurations for what has now been about 10 years of freedom from pain, was dealing with other related disasters. First it's a fight even to get things consistent, and as seen in your screenshot there's inconsistency between the content rendering, the title, and the address bar. Worse, though, the kerning in `Yes` in your screenshot is just too bad for constant use for me.
I hope as we usher in a new generation of font tech, that we can finally reach consistency. I really want to see these engines used on Windows and macOS as well, where they're currently only used as fall-back, because I want them to present extremely high quality results on all platforms, and then I can use them in desktop app work and stop having constant fights with these subsystems. I'm fed up with it after many decades, I just want to be able to recommend a solution that works and have those slowly become the consistently correct output for everyone everywhere.
Maybe if what FreeType is pushing for (fonts as WASM binaries) continues to take hold and encompasses more of fonts, we'll find more consistency over time, though.
Admittedly I haven't looked into how the setup is configured, and haven't tried it in Chrome, so maybe it's still possible to enable full hinting as intended by the font somehow.
[0] https://gist.githubusercontent.com/Maxdamantus/3a58d8e764b29...
[1] https://gist.githubusercontent.com/Maxdamantus/3a58d8e764b29...
I'm hoping (though I haven't checked) that Skrifa will support hinting; I'm also not sure how it interacts with fontconfig. I noticed your screenshot uses full hinting (a subjective choice I currently don't use on my machines), presumably with integer glyph positioning/advance, which isn't scale-independent (neither is Windows GDI), though this is quite a nonstandard configuration IMO.
You don't have to do the Big Rewrite™, you can simply migrate components one by one instead.
My understanding is that Microsoft chose Go precisely to avoid having to do a full rewrite. Of all the “modern” native/AoT-compiled languages (Rust, Swift, Go, Zig), Go has the most straightforward 1:1 mapping in semantics to the original TypeScript/JavaScript, so a tool-assisted translation of the whole codebase is feasible with bug-for-bug compatibility and minimal support/utility code.
It would be of course _possible_ to port/translate it to any language (Including Rust) but you would essentially end up implementing a small JavaScript runtime and GC, with none or very little of the safety guarantees provided by Rust. (Rust's ownership model generally favors drastically different architectures.)
It was about being able to have two codebases (old and new) that are so structurally similar that it won't be a big deal to keep updating both.
In entirely unrelated news, I think Chrome should totally switch engines from V8 to a Rust built one, like hmm... our / my Nova JavaScript engine! j/k
Great stuff on the font front, thank you for the articles, Rust/C++ interop work, and keep it up!
Go is also memory safe.
PS. Note that unlike most languages, a datarace on something like an int in go isn't undefined behavior, just non-deterministic and discouraged.
Skia is made in C++. It's made by Google.
There's FreeType. It actually measures and renders glyphs, simultaneously supporting various antialiasing modes, hinting, kerning, interpreting TrueType bytecode and other things.
FreeType is made in C. It's not made by Google.
Question: why was it FreeType that got a Rust rewrite first?
Skia's tendrils run deep and leak all over the place.
There's also a different set of work to invest in, next-generation Skia is likely to look quite different, moving much of the work on to the GPU, and this work is being researched and developed: https://github.com/linebender/vello. Some presentations about this work too: https://youtu.be/mmW_RbTyj8c https://youtu.be/OvfNipIcRiQ
But I don't really know anything about how all the parts fit together; just speculating.
Some third-party tools exist to tweak how ClearType works, like MacType[1] or Better ClearType Tuner[2]. Unfortunately, these tools don't work in Chrome/electron, which seems to implement its own font rendering. Reading this, I guess that's through FreeType.
I hope that as new panel technologies become more prevalent, somebody takes the initiative to help define a standard for communicating subpixel layouts from displays to the graphics layer, which text (or graphics) rendering engines can then use to improve type hinting. I do see some efforts in that area from Blur Busters[3] (the UFO Test guy), but still not much recognition from vendors.
Note I'm still learning about this topic, so please let me know if I'm mistaken about any points here.
[1] https://github.com/snowie2000/mactype
Personally I don't bother anymore anyway since I have a HiDPI display (about 200dpi, 4K@24"). I think that's a better solution, simply have enough pixels to look smooth. It's what phones do too of course.
Windows I believe still uses RGB subpixel AA, because OLED monitor users still need to tweak ClearType settings to make text not look bad.
But it only applies to Linux, where small fonts can be made to look crisp this way. Windows AA is worse; small fonts are a bit more blurred on the same screen. And macOS is the worst: connecting a 24" FHD screen to an MBP gives really horrible font rendering unless you make the fonts really large. I suppose it's because macOS does not do subpixel AA at all and assumes high-DPI screens only.
Subpixel text rendering was removed from MacOS some time ago, though, presumably because they decided it was not needed on retina screens. Maybe you're thinking of that?
Pretty sure phones do grayscale AA.
“Whole pixel/grayscale antialiasing” should be enough, and then a specialized display controller would handle the rest.
That being the case, it may end up being that in the near future, alternative arrangements end up being abandoned and become one of the many quirky “stepping stone” technologies that litter display technology history. While it’s still a good idea to support them better in software, that might put into context why there hasn’t been more efforts put into doing so.
It’s G’s _business_ folks (ie, C-level executives) that I have no respect for. Their business model of exploiting users is just awful.
(in reality Google is investing a lot of effort into automating the FFI layer to make it safer and less tedious)
Are font formats so bad that the files need to be sanitized?
Also, note that they identified integer overflows as one of the causes of vulnerabilities. It is sad that today many languages do not detect overflows, and even modern architectures like RISC-V do not include overflow traps, although detecting an overflow doesn't require many logic gates. C is old, OK, but I cannot understand why new languages don't trap by default: even Rust only panics on overflow in debug builds and silently wraps in release unless overflow checks are enabled.
Considering Rust is just tooling and coding practice in front of LLVM IR, does this statement not also include Rust? There are in fact formally verified C and C++ programs; does that formal verification also count as tooling and coding practice and therefore not apply?
If either of the above is true why does it matter at all?
I am specifically calling out your blanket statement and want to open up discussion about it, because at present your implied point was that it is impossible to write safe code in C/C++ and only possible in Rust; however, the very point you made would also apply to Rust.
There are also non-safety issues that may affect the integrity of a program. I recently looked into Rust again; I haven't given up just yet, but just to instantiate a WGPU project, the amount of incidental complexity is mind-boggling. I haven't explored OpenGL, but considering that the unofficial WebGPU guide for Rust [1] recommends using an older version of winit because the current version would require significant rewrites due to API changes, it is not encouraging. Never mind the massive incidental complexity of needing an async runtime for WebGPU itself; is this a pattern I am going to see in other parts of Rust? Rust already has enough complexity without injecting coroutines in places where blocking functions are reasonable.
1. bugs in new code
2. bugs in old code that have survived years of scrutiny
Their conclusion from this is that they're generally greenlighting Rust for new projects but not requiring a rewrite for old projects because they believe rewrites are likely to introduce new category-1 bugs (regardless of the target language). But new code always has category-1 bugs, so if they write it in Rust they cut down on the number of possible category-2 bugs thanks to the static constraints imposed by the type system.
I'm assuming the font module got rewritten into Rust because they planned to rewrite it period (i.e. their analysis suggests the current status quo of how it's built and maintained is generating too many bugs, and Google tends to be very NIH about third-party dependencies so their knee-jerk response would be "solve that by writing our own").
It's always better to get new memory-safe code in to replace old memory-unsafe code, when you can, but the prioritization here is a little more complex.
So though FreeType is carefully written with respect to correctness, it was not meant to deal with malicious input, and that robustness is hard to retrofit.
Or can admit themselves as fallible.
Or realize that even if they are near-infallible, unless they've studied the C _and_ C++ standards in the finest detail, they will probably at some point unintentionally produce code that modern C++ compilers will manage to make vulnerable in the name of undefined-behavior optimizations (see the recent HN article about how modern C/C++ compilers have a tendency to turn fixed-time multiplications into variable-time ones vulnerable to timing attacks).
But of course, there are always tons of people who believe they are better than the "average schmuck" and never produce vulnerable code.
[1] https://gitlab.freedesktop.org/freetype/freetype/
See comments under this video: https://youtu.be/gG4BJ23BFBE
We are all humans, and humans make mistakes. The obvious way to avoid mistakes is to formalize "correct code" and have code checked by machines; in this context, let the compiler guarantee correctness. Anything else will never be as effective.
... then it's not at all obvious anymore. In these situations you'd rather drop down to assembly than go up to something like Rust.
I'm currently doing my 2nd take on a userspace allocator (fixed-size pages but of different sizes, running 32-bit too) as well probably my 8th take or so on a GUI toolkit. I've experimented with lots of approaches, but it always seems to come back to removing abstractions, because those seem to kill my work output. A big reason is they blow up code size and make code (by and large) less understandable, as well as harder to change because of the sheer size.
Not necessarily saying that approach doesn't lead to more potential security issues. I mostly just turn up warnings, not doing fuzzing etc. But it seems to be a more pragmatic way for me to achieve functioning (and robust-in-practice) software at all.
Obviously a lot younger than Freetype.
It would also be a lot less important if we did not do silly things like loading fonts from random websites.
Silicon Valley devs clearly believe that machine translation is a magical technology that completely solved internationalization, like in Star Trek. No, it's far from it. Ever since the internet was flooded with automatically translated garbage, the experience of international users, at least bilingual ones, has gotten much worse.
This despite Google already knowing what languages I can read…
I'm trying to learn the language of the country I now live in. And yet, Google thinks they know better than me, my preference, at the moment.
And this preference is quite circumstantial, mind you.
If you're a global company it would be silly to assume all your readers speak/read a single language. AI translations (assuming that's what this is) are not perfect, but better than nothing.
I get how poor translation could be irritating for bilingual people who notice the quality difference, but I guess you guys have the advantage of being able to switch the language.
Worse still, Google Maps will insist on only showing street names, area names, and directions in Swedish.
This is not a just Silicon Valley problem though; Redmond had similar issues if you just use the newer parts of Windows in a non-English language.
edit: I'm actually not seeing the C FFI, looking at the library. It must be there somewhere, because it's being used from a C++ codebase. Can somebody point to the extern C use? I'm finding this inside that `fontations` repository:
> rg 'extern "'
fontations/fauntlet/src/font/freetype.rs
161:extern "C" fn ft_move_to(to: *const FT_Vector, user: *mut c_void) -> c_int {
168:extern "C" fn ft_line_to(to: *const FT_Vector, user: *mut c_void) -> c_int {
175:extern "C" fn ft_conic_to(
187:extern "C" fn ft_cubic_to(
>It might help if there was some way to play around with the APIs without having to wait so long for it to build. But as far as i know that's not currently possible.
Its age is completely irrelevant other than as demonstration that it did what it set out to do, and still performs that job as intended for people who actually work on, and with fonts.