But over the past few decades, as glyphs have exploded in number, I've found myself wanting a totally different system of font definition.
I want fonts to be made out of just a handful of primitives -- e.g. here's a vertical stroke, a horizontal stroke, diagonal strokes, here's what various serif endings look like, here's what the bowls of letters look like, etc.
An entire Latin alphabet can easily be extrapolated from that -- but also Cyrillic, for example. And then you can go further, and specify elements used only in Greek, and so on...
The idea is that there's a "default", boring, generic universal font template that covers all glyphs, and that a type designer simply modifies parameters as desired.
Now the designer can go as far as they want, with individual glyph adjustments overriding the automatically-generated ones, custom kerning pairs and whatnot.
But the benefit would be that, even if they don't, then from just the Latin alphabet you already get Cyrillic, Greek, Arabic, Chinese, Japanese, etc. characters that match well (in x-height, stroke width, and so on, as needed).
The more I work with typography, the more I think static font outlines for each glyph are the wrong way to represent things. Smart parametrized outlines are what we need.
> the idea being that there's a "default", boring, generic universal font template that covers all glyphs
> a type designer simply modifies parameters
Have you looked through the Unicode code charts lately and really thought about how many primitives you'd need in order to create glyphs not just for Latin/Cyrillic/Greek, but also for Arabic, Telugu, Thai, Lao, Javanese, Mongolian, etc., etc.? Never mind the miscellaneous symbols and dingbats, and the ever-growing collection of emoji....
I don't think this is a realistic proposition.
Within a closely-related group of scripts -- like Latin/Cyrillic/Greek -- and within a constrained range of styles, yes, it's possible: e.g., see Knuth's METAFONT and the various font families that have been created with it.
I'm not that deep into designing typefaces, but I'd expect that many designers already work in a mode similar to what the GP suggested. The only difference is that you'd encode the ruleset instead of the outcome.
> The most unusual characteristic of Computer Modern, however, is the fact that it is a complete type family designed with Knuth's Metafont system, one of the few typefaces developed in this way. The Computer Modern source files are governed by 62 distinct parameters, controlling the widths and heights of various elements, the presence of serifs or old-style numerals, whether dots such as the dot on the "i" are square or rounded, and the degree of "superness" in the bowls of lowercase letters such as "g" and "o". This allows Metafont designs to be processed in unusual ways; Knuth has shown effects such as morphing in demonstrations, where one font slowly transitions into another over the course of a text. While it attracted attention for the concept, Metafont has been used by few other font designers; by 1996 Knuth commented "asking an artist to become enough of a mathematician to understand how to write a font with 60 parameters is too much" while digital-period font designer Jonathan Hoefler commented in 2015 that "Knuth's idea that letters start with skeletal forms is flawed".
For a small sample of font source code, see https://en.wikipedia.org/wiki/Metafont
Edit: for an idea about what variants can be generated, see https://s3-us-west-2.amazonaws.com/visiblelanguage/pdf/16.1/....
https://en.wikipedia.org/wiki/Multiple_master_fonts and https://en.wikipedia.org/wiki/Variable_fonts are younger, but also don’t quite work. It’s very hard, and maybe not really possible, to design parametrized fonts that look good across wide parameter ranges.
In my mind, however, the concept is less about providing parameter values (like Computer Modern) and more about actually drawing the parts. In other words, it's less about variable fonts/multiple masters and more about "here's what the end of a vertical serif looks like at this weight", "here's what a full vertical stem looks like". (Though if you're extrapolating from Latin to Arabic, where virtually nothing is in common, then it might be closer to a multiple-masters situation.)
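To make that concrete, here's a toy sketch in Python -- every name and structure here is my own invention, not any real font tool's API. Glyphs become recipes that place shared, designer-drawn parts, so redrawing one part restyles every glyph that uses it, across scripts:

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str                            # e.g. "vertical-stem", "top-serif"
    contour: list[tuple[float, float]]   # designer-drawn outline points

@dataclass
class GlyphRecipe:
    # (part name, (x, y) placement in em units)
    placements: list[tuple[str, tuple[float, float]]]

RECIPES = {
    "l": GlyphRecipe([("vertical-stem", (0.1, 0.0)),
                      ("top-serif",     (0.1, 0.7)),
                      ("bottom-serif",  (0.1, 0.0))]),
    # Cyrillic "п" reuses the same stems, so its weight and proportions
    # match the Latin letters for free:
    "п": GlyphRecipe([("vertical-stem",  (0.1, 0.0)),
                      ("vertical-stem",  (0.6, 0.0)),
                      ("horizontal-bar", (0.1, 0.5))]),
}

def outline(glyph: str, parts: dict) -> list:
    """Expand a recipe into translated contours (ignoring overlap removal,
    joins, optical corrections, and everything else that makes this hard)."""
    contours = []
    for part_name, (dx, dy) in RECIPES[glyph].placements:
        part = parts[part_name]
        contours.append([(x + dx, y + dy) for x, y in part.contour])
    return contours
```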
Maybe it's too hard to do, I can't be sure. But I've also seen deep-neural-network-generated fonts posted here that have intrigued me... and even if it's too hard to simply "assemble" fonts from pieces and get a good result, I can't help but wonder whether deep learning applied to tens of thousands of typefaces couldn't be a tool in making that happen effectively.
The "two-story" "a" is default, but set a bit to choose the single-story version. Similarly with a lowercase "g" and so on.
(And of course plenty of existing fonts already give variants for these kinds of things too -- so the user could even still choose an alternate one, even if the font designer never designed it!)
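A sketch of what flipping such a parameter might look like (the parameter names here are made up):

```python
# Hypothetical variant parameters layered on the universal template.
# The template ships defaults; a designer -- or, if the renderer exposes
# them, the end user -- overrides a setting instead of drawing new outlines.
params = {"a.story": "double", "g.story": "double"}  # template defaults
params["a.story"] = "single"  # choose the single-story "a"
```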
I was about to suggest you try Prototypo [1], but I just went to the website and it seems it has been sunsetted, which is a shame. It was a very interesting idea — you could create your own fonts by changing sliders, which would change all the different features of the font.
I backed their Kickstarter project [2] in 2014, but never really used it after their development builds. I suppose I imagined myself being able to design all these different typefaces, but when I was actually given the ability to, I preferred to stick to ones other people had created.
[1] https://www.prototypo.io/ [2] https://www.kickstarter.com/projects/prototypoapp/prototypo-...
Tom7, who makes excellent videos about computers, has one about splitting letters up to create anagrams in this way.[0]
All of his videos are technically astounding, like the one where he frankensteined a Raspberry Pi into a game cartridge to play SNES games on an NES, or the one where he designed and built a computer using IEEE 754 NaN and infinity instead of 0 and 1.
[0] https://medium.com/@mwichary/stanislaw-lem-on-google-s-homep...
When we wanted to add support for Unicode usernames, we had to include support for generating avatar images with those extra possible characters. It was quite fun creating the fallback system this blog post talks about and trying to support as many real languages as possible. The Noto project was really helpful, not only the fonts but also the Python scripts they provide for dumping the codepoints covered by a font [2]. (A toy version of that kind of coverage check is sketched below.)
You can check the fallback logic at https://github.com/discourse/letter-avatars/blob/master/lib/...
1. https://meta.discourse.org/t/switch-from-gravatar-to-html-cs...
2. https://github.com/googlei18n/nototools/blob/master/nototool...
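As an illustration of the coverage check mentioned above (a minimal sketch, not the actual Discourse code -- the file names and fallback chain here are made up), fontTools can dump a font's cmap directly:

```python
from fontTools.ttLib import TTFont

# Hypothetical fallback chain; in reality this would list the Noto fonts
# the avatar generator ships with.
FALLBACK_CHAIN = [
    "NotoSans-Regular.ttf",
    "NotoSansArabic-Regular.ttf",
    "NotoSansCJKsc-Regular.otf",
]

def covered_codepoints(path):
    """Dump the set of codepoints a font file actually maps to glyphs."""
    font = TTFont(path)
    return set(font["cmap"].getBestCmap().keys())

def pick_font(username):
    """Return the first font in the chain covering every character of the
    username, or None if no single font does."""
    needed = {ord(ch) for ch in username}
    for path in FALLBACK_CHAIN:
        if needed <= covered_codepoints(path):
            return path
    return None
```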
Why would someone fall back from a serif font (Joanna) to Helvetica, Arial, sans-serif?
I would much rather have your custom font and then sans-serif, with nothing in between. Let me use your web font, or use my sans-serif, which is probably one of the fonts you listed in your fallback order anyway. This way, if I’ve expressed my own preference (I have), those other Helveticas and Arials don’t get in the way.
The real problem with the fallback stack is that there’s no way of adjusting things depending on the font used. Perhaps web font A renders about 10% larger than system font B; you really want to be able to say “bump the font size and line height by 10% if you have to fall back to B”, but you can’t.
So for simple fallbacks (of the “if this font is unavailable” type, rather than the “if this glyph is unavailable in this font” type), the main reason to add fonts between your font and serif/sans-serif is to match sizes better.
Then in the case of monospace, the problem is that some (most?) platforms have a default that most find ugly (e.g. Courier New), so I can more readily understand people preferring to opt into prettier fonts offered by the platforms, even if I wish they wouldn’t do it.
I've never personally seen it done, but couldn't you do this with the Unicode Private-Use Area? Assuming you have your own blog where you control the CSS, you could sprinkle in a few private-use codepoints, and then add a custom web font to the CSS font fallback that defines glyphs for those same codepoints.
(I know putting a document with Private-Use codepoints up on the public web would go against the philosophy of Unicode; and that the correct thing to do here would be to email the Unicode Consortium about the need for your character. [They'll probably agree!] But, ignoring the philosophy, in practice this would still work. And at least you're not directly confusing machines that try to extract semantics from the text, the way e.g. Wingdings fonts do.)
• Screen readers won’t be able to do anything with it;
• Some users, especially on mobile, will turn off web font loading;
• Even if custom fonts are enabled, it’s not terribly uncommon for them to fail to load for any number of reasons;
• It’s best for performance if you can do something like `font-display: optional` or `font-display: fallback`; but if you are actually depending on glyphs in your font, you can’t do this. (For that font, anyway; if you use a custom font for just that range, you could still make any custom body text font optional, though at the cost of an extra request for the private use font rather than it being embedded in the rest.)
Private-use codepoints at least have the advantage over inline images of being “opaquely” copy-and-paste-able into other documents, machine-readable, etc. Any system that works in terms of Unicode text will pass along the private-use codepoints in the stream, where it might strip higher-level out-of-band features like images.
As such, private-use codepoints are the analogous feature to the .notdef glyph in fonts, but for machine semantics rather than for human comprehension. In both cases, the “reader” (human/machine) gets something that it knows is there but doesn’t recognize, but knows is valid, and can opaquely be preserved and passed along, and potentially “made legible” through the lens of a different eye than theirs.
One place I would see this as being useful is in the display of unique not-yet-formalized emoji in chat systems. Copy-and-pasting such “text” out of the system would just get you opaque PUA codepoints; but if you emailed such “text” to somebody, and then they copy-and-pasted it back into the chat system, they’d see the same emoji you saw originally. It’s like a public URL representing a private document that you have to be logged into the relevant system to “access.”
—————
The real negative of the Private-Use Area codepoints, from a conservationist/archivist perspective, is that unlike HTML images that each have a distinct—if opaque—URL, the Unicode Private-Use Area is quite limited, and so prone to collisions in usage.
If the Consortium had instead come up with a stringing scheme such that any private-use glyph was actually formed from a sequence of private-use combining codepoints (sort of like the flag combiners) encoding, say, a full UUID to represent the PUA codepoint, then organizations could generate private, non-colliding codepoints without any need for registration, using e.g. UUIDv4. They could then rely on the assumption that such codepoints will only have semantics under their private system—and under any other system that wants to be explicitly compatible with it—rather than potentially having other, incompatible meanings in systems that just happen to reuse them, as happens today.
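A minimal sketch of such a scheme, with one assumed “introducer” codepoint and sixteen nibble-carrier codepoints in the Plane-15 PUA (every assignment here is invented; the Consortium defines nothing like this):

```python
import uuid

LEAD = 0xF0000          # assumed introducer codepoint
NIBBLE_BASE = 0xF0010   # assumed base for the 16 nibble-carrier codepoints

def encode_private_glyph(u: uuid.UUID) -> str:
    """Encode a UUID as a 33-codepoint private-use sequence."""
    return chr(LEAD) + "".join(chr(NIBBLE_BASE + int(n, 16)) for n in u.hex)

def decode_private_glyph(s: str) -> uuid.UUID:
    """Recover the UUID from a sequence produced by encode_private_glyph."""
    assert len(s) == 33 and ord(s[0]) == LEAD
    nibbles = "".join(format(ord(c) - NIBBLE_BASE, "x") for c in s[1:])
    return uuid.UUID(hex=nibbles)

# Mint a new private glyph; the sequence survives any Unicode-clean pipeline.
glyph = uuid.uuid4()
assert decode_private_glyph(encode_private_glyph(glyph)) == glyph
```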
Interestingly, such Private-Use UUID codepoint-sequences could then later be “adopted” into Unicode through a formal process. People who had created documents that used such meta-codepoints could register them with the Consortium, where the Consortium would 1. create “official” codepoints for those same semantics; and 2. ship a regularly-updated database file mapping meta-codepoints to later officially-registered codepoints. One pass of Unicode normalization would then involve using that database to replace private-use UUID codepoint-sequences with their registered full codepoint.
Basically, this would take the thing that happened as a series of one-off events with Unicode codepage embeddings, and turn it into a continuous ongoing fine-grained process that anyone can take advantage of.
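Continuing that sketch, the “adoption” step would then just be a database lookup during normalization (the registration mapping below is fictitious, and `LEAD`/`NIBBLE_BASE` are the assumed values from the previous snippet):

```python
import uuid

LEAD = 0xF0000
NIBBLE_BASE = 0xF0010

# Hypothetical Consortium-published database: formerly private UUIDs
# mapped to their later officially assigned codepoints.
ADOPTED = {uuid.UUID("00000000-0000-4000-8000-0123456789ab"): 0x1FABC}

def normalize(text: str) -> str:
    """Replace adopted private-use sequences with their official codepoints;
    leave unknown or malformed sequences untouched."""
    out, i = [], 0
    while i < len(text):
        if ord(text[i]) == LEAD and i + 33 <= len(text):
            nibbles = "".join(format(ord(c) - NIBBLE_BASE, "x")
                              for c in text[i + 1:i + 33])
            try:
                u = uuid.UUID(hex=nibbles)
            except ValueError:   # not a well-formed private sequence
                u = None
            if u in ADOPTED:
                out.append(chr(ADOPTED[u]))
                i += 33
                continue
        out.append(text[i])
        i += 1
    return "".join(out)
```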
For example, stuff in this article doesn't render well for me (see the caption of the image): https://imgur.com/a/aFDSKyG
It is reasonable that you only spend resources on installing this if you actually need it -- you can save a few hundred megabytes, at least.
Maybe fonts-ancient-scripts is the one you need here.