Humans work well with ambiguity and context. When your coworker says "Bob's birthday is this weekend," you know she means her husband Bob, not Bob from accounting whom nobody likes. And you even prefer that system to having an unambiguous human identifier, even a friendly one like "Bob-4592-daring-weasel-horseradish".
Machines, on the other hand, hate ambiguity and context. Every bit of context is an extra bit of state that has to be stored somewhere, and now all your results are actually statistical guesses - how inelegant!
In the early days of computing, there was no separation between the internals of the machine and its interface. If you worked on a computer, you were as much the mechanic as the driver. We got used to usernames, filenames, and hostnames because they were a decent compromise; they were meaningful enough to humans, and unambiguous enough for machines, so we could use them as a kind of human-computer pidgin.
But we don't need them anymore, and they were never really very good at either job anyway. Google's (probably accidental) discovery was that we were using the web wrong. Everyone was building web directories and portals because they thought that URLs weren't discoverable, but the real problem was that they weren't usable. Search was the first human interface to the web.
So Google's going to kill the URL, Facebook's going to kill the username, and someone (apparently not Microsoft) is going to kill the filename. There'll be much wailing and gnashing of teeth from the old guard while it happens, but someday our grandchildren will grow up never having to memorise an arbitrary sequence of characters for a computer, and I think that's a future to look forward to.
When I ask my car to call my wife using only her first name, it suggests a list of 3 people who I'm not even sure how they got into my contacts list. Siri, on the other hand, gets it right every time with the exact same request. I wouldn't say my car hates ambiguity; rather, its programmers failed to bridge the human/machine interaction gap and meet the person halfway. ("If you want to talk to a computer, you have to think like one.")
I'd say it's programmers or deadlines that cause the extra work of accounting for ambiguous data to get skipped. It doesn't take a neural net to look at the recent-calls list for the most frequently or most recently dialed [wife's first name].
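As a sketch of that point (the log format, function names, and scoring weights here are all invented for illustration): even a crude score combining frequency and recency over the call log picks the right contact, no machine learning required.

```javascript
// Hypothetical sketch: resolve an ambiguous first name against the call
// log by scoring matches on frequency and recency.
function bestMatch(query, callLog, now) {
  const scores = new Map();
  for (const call of callLog) {
    if (!call.name.toLowerCase().includes(query.toLowerCase())) continue;
    // One point per call, decayed by how many days ago it happened.
    const ageDays = (now - call.timestamp) / 86400000;
    scores.set(call.name, (scores.get(call.name) || 0) + 1 / (1 + ageDays));
  }
  let best = null, bestScore = -Infinity;
  for (const [name, score] of scores) {
    if (score > bestScore) { best = name; bestScore = score; }
  }
  return best;
}
```

A contact dialed daily easily outscores a namesake dialed once a year, so the car could just call her instead of presenting a list.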
One irony of your "Bob" example is that sometimes using someone's last name actually adds ambiguity: "It's Bob Lingendorfer's birthday this weekend!" ... "Who is Bob Lingendorfer? ... Ohhh, you mean your husband!".
Maybe it's not irony, it's just that people read a lot into data and might assume that all of it is relevant to the task at hand. My car kind of does the opposite and lazily stops at the first three "close enough" hits on my wife's name.
And since computing is so centralized these days, this means that whatever company made the software needs to know that context about you too.
There's something to be said for computers staying dumb. I'm okay with my co-workers knowing my social graph well enough to recognize my spouse's first name by context. I'm not okay with faceless corporations or governments having that same information.
darn computers!
Usernames reflect a fundamental human desire to create an alter ego free from the burden of one's legal name and socioeconomic context. If Samuel Clemens were a blogger, he would write under the username @marktwain. Alonso Quixano might call himself @donquixote69. Anakin Skywalker would want to be known (and feared) as @darth_vader, not because his real name is unusable, but because he prefers to be called Darth Vader.
People have had titles and pseudonyms for ages. Usernames are a continuation of this tradition, not merely an invention of the 20th century. The global uniqueness requirement is of course rather silly, but enforcing a real-name policy on everyone is just as silly. If our grandchildren have no concept of usernames/handles/whatever, it might be more a sign of great oppression and loss of privacy than of technological progress.
Ditto for filenames. We programmers have a habit of using weird filenames that really do look like arbitrary sequences of characters, but most of the rest of the world just uses human-readable filenames like "Financial report 3Q 2017". Change a few numbers inside, and it's still "Financial report 3Q 2017", content-addressing be damned. The document might not be stored as a physical file in the future, but then again, have files ever been physical? Filenames are just labels that we stick on a logical chunk of information. Implementation details can differ, but the concept itself is not going anywhere as long as humans like to put stable labels on mutable things. (This, unfortunately, tends to escape notice when your concept art for a filename-less system only contains a handful of photographs with pretty thumbnails.)
This is the point that I think is completely lost on the author of the article, probably because of a focus on API design. It's a good thing that we can replace that dog-eared copy of Moby Dick with a shiny new one when the time comes, and our users don't need to change their URLs.
APIs are intended to be used primarily by machines, so it's fine for the URL structure to favor the predictable uniqueness of IDs. However, for most URLs intended for use by humans, the forces are different.
A human-readable URL is not a pointer, it's a symlink.
Not so sure about the "well" part there. I've encountered people who love to make guesses about the context (and others who actually wish you'd do the same). That, coupled with ambiguity, creates disasters ranging from ordering the wrong lunch to broken relationships.
I'd rather have humans take less pride in being ambiguous and make attempts to be as precise as possible.
So there's this whole ambiguity aversion spectrum. Maybe it correlates to the autism spectrum, maybe it doesn't. It's arguably much more important. Even in mathematics you have Poincaré, a demigod among men who kept publishing papers with significant mistakes, while in the social sciences you have people like Niklas Luhmann and Bruno Latour who approach their subjects with utmost precision and dedication to detail.
I'm a more ambiguous, big-picture-even-in-small-problems thinker; and I thrive with more detail-oriented coworkers that walk me through the trees as I walk them through the forest. This has a lot to do with me being able to think in very ambiguous terms and narrow down as needed to interact or provide for the needs of others. Left to my own devices I come up with extremely abstract philosophical theories that are not useful at all! Conversely left to their own devices precision people become paperclip optimizers.
I want to speculate further into "edgy" territory: maybe the whole gender divide that seems to come up in psychometrics and the labor market and so on is really an ambiguity/precision divide. The evolution of technology has actually increased the value of ambiguity, as computers do much of the precision work for us -- maybe making tech "woman-friendly" is rather about identifying those big-picture/detail-oriented complementarities.
The TLDR of TFA is that an API can support both human-meaningful and machine-meaningful URLs.
Not if we kill them first!
For at least the past decade, advertising in Japan has been showing people which search term to enter to find the website instead of a URL.
Both R and Perl seem like languages where it wouldn't be extremely strange for the function to also look back at the context of the calling function. Then it could find out whether the two parties had an affinity for this person, and whether it was a conversation about something like figuring out an excuse to miss a party or one like finding a gift, in order to determine which Bob.
You could easily have a bijective encoding at a frontend proxy that translates between the above and e.g.
> "4592-13f7-de41-203a"
(i.e. discards the descriptive part of the slug, and then reverses the unique words back into their index-positions in the same static 64k-word dictionary used for generation, resulting in a regular UUID.)
So I'm not sure it's a win.
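A minimal sketch of that proxy-side translation, with a toy 8-word list standing in for the 64k-word dictionary (with a full 2^16-word list, each word would reverse to exactly one 16-bit hex group; all names here are invented):

```javascript
// Reverse the friendly slug back into a plain ID: drop the descriptive
// name, keep numeric groups, and map each dictionary word back to its
// index as a 4-digit hex group. Toy dictionary for illustration only.
const words = ['daring', 'weasel', 'horseradish', 'mellow', 'otter', 'sage', 'brisk', 'fern'];
const wordIndex = new Map(words.map((w, i) => [w, i]));

function slugToId(slug) {
  return slug
    .split('-')
    .slice(1) // discard the human-readable name ("Bob")
    .map(part => wordIndex.has(part)
      ? wordIndex.get(part).toString(16).padStart(4, '0')
      : part) // hex groups pass through unchanged
    .join('-');
}
```

So "Bob-4592-weasel-horseradish" becomes "4592-0001-0002" under the toy dictionary; the mapping stays bijective as long as the word list is fixed.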
This allows for easy URL readability, while also having a unique ID.
In the context of this post (the library example) that would look like
library.com/books/1as03jf08e/Moby-Dick/
1) there are now an infinite number of URLs for every one of your pages that may end up separately stored on various services (mitigated for only some kinds of service if you redirect to correct),
2) if the title changes the URLs distributed are now permanently wrong as they stored part of the content (and if you redirect to correct, can lead to temporary loops due to caches),
3) the URL is now extremely long and since most users don't know if a given website does this weird "part of the URL is meaningless" thing there are tons of ways of manually sharing the URL that are now extremely laborious,
4) you have now made content that users think should somehow be "readable" but which doesn't even try to be canonical... so users who share the links will think "the person can read the URL, so I won't include more context" and the person receiving the links thinks "and the URL has the title, which I can trust more than what some random user adds".
The only website I have ever seen which I feel truly understands that people misuse and abuse title slugs and actively forces people to not use them is Hacker News (which truncates all URLs in a way I find glorious), which is why I am going to link to this question on Stack Exchange that will hopefully give you some better context "manually".
meta.stackexchange.com/questions/148454/why-do-stack-overflow-links-sometimes-not-work/
Many web browsers don't even show the URL anymore: the pretense that the URL should somehow be readable is increasingly difficult to defend. A URL should sometimes still be short and easy to type, and these title-slug URLs are neither.
If anything, other critical properties of a URL are that they are permanent and canonical, and neither of these properties tend to be satisfied well by websites that go with title slugs, and while including the ID in there mitigates the problem it leaves it in some confusing middle-land where part of the URL has this property and part of it doesn't.
If you are going to insist upon doing this, how about doing it using a # on the page, so at least everyone had a chance to know that it is extra, random data that can be dropped from the URL without penalty and might not come from the website and so shouldn't be trusted?
(edit to add:) BTW, if you didn't know you could do this, Twitter is the most epic source of "part of the URL has no meaning" that I have ever run across, as almost no one realizes it due to where it is placed in the URL.
twitter.com/realDonaldTrump/status/247076674074718208
No need to redirect, that's what canonical links are for:
https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types
I don't disagree in that I mostly dislike URL slugs, too. Except for some hub pages ("photos", "blog", etc.), a numerical ID is more than enough. But the combination of ordering and display modes and filtering can still amount to a huge number of combinations, so canonical links are still needed - to have as many options for the user as possible and allow them all to be bookmarked, but also give search engines a hint on what minor permutations they can ignore safely.
I wish search engines would completely ignore words in the URL. If it's not in the page (or the "metadata" of actual content on pages linking to it, and so on), screw the URL. If it is in the page (and the URL), you don't need the URL. As long as they are incentivized, we'll have fugly URL schemes.
3) is not a problem for hyperlinks (URL not visible) or even for direct links (the length is not burdensome), and if you care about a short URL an even shorter form is available
4) seems like a feature? the person sending the link will only ever include as much information as they deem necessary anyway. If the recipient wants more info they'll either request it or click the link.
Trust is an interesting point, but you can equally put literally anything in the client-side anchor (e.g. meta.stackexchange.com/questions/148454/#definitely-not-a-rick-roll), so I don't see what a viable alternative would be.
> If you are going to insist upon doing this, how about doing it using a # on the page, so at least everyone had a chance to know that it is extra, random data that can be dropped from the URL without penalty and might not come from the website and so shouldn't be trusted?
The fragment doesn't get indexed by search engines, so not many will see it. Along with that, in my understanding, having something human-readable in the URL helps with SEO on at least Google and Bing, so doing this could hurt your search rankings, which isn't a good thing.
2: no. The URL is not wrong. Rather, it won't describe the content perfectly anymore. If this is an issue, you can assign a new ID to your page.
3: that's why you have URL shorteners. But what's wrong with a long URL? And how does it complicate sharing? To share, you copy/paste the URL. Nothing changed. And now the URL describes the content! (That's the reason we do it.)
4: that’s a good thing!
So yeah. I’ll keep doing this for my blog and I hope websites like SO keep doing that as well
I think I have a defense for this. I consistently long press links on mobile to see the url before deciding whether to load the page or not. Just to see if I can be bothered.
I'm missing something -- what does length have to do with the difficulty of sharing a URL? I can't remember the last time I typed out any URL past the TLD.
Sometimes I call this a URL black hole.
In all fairness, black holes are everywhere when you consider that most web servers ignore unrecognized query params for routing. Examine this URL:
https://news.ycombinator.com/item?t=choosing-between-names-a...
Of course the difference is that Hacker News doesn't disseminate URLs of that form, but that doesn't mean someone couldn't pollute the internet with them.
What services? Web crawlers? I'm sure the ones I would care about are smart enough to know how this works. There are many ways infinite valid URLs can be made. Query params, subdomains and hashroutes to name a few.
> if the title changes the URLs distributed are now permanently wrong as they stored part of the content (and if you redirect to correct, can lead to temporary loops due to caches),
You don't redirect. The server doesn't even look at the slug part of the URL for routing purposes. You can change the URL with JavaScript post-load if it bothers you (as Stack Overflow does). Cache loops are an entirely avoidable problem here.
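A sketch of that post-load fixup (function and global names invented; Stack Overflow's actual implementation may differ):

```javascript
// The server routes on the numeric ID alone; the client then rewrites
// the visible URL to the canonical slug. Pure helper plus a guarded
// browser-only side effect.
function canonicalPath(currentPath, id, canonicalSlug) {
  const canonical = `/questions/${id}/${canonicalSlug}`;
  return currentPath === canonical ? null : canonical;
}

// PAGE_ID and PAGE_SLUG are assumed to be injected by the server.
if (typeof history !== 'undefined' && typeof location !== 'undefined') {
  const fixed = canonicalPath(location.pathname, PAGE_ID, PAGE_SLUG);
  if (fixed !== null) history.replaceState(null, '', fixed);
}
```

`replaceState` swaps the address-bar URL without triggering a navigation, so there's no redirect and nothing for caches to loop on.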
> the URL is now extremely long and since most users don't know if a given website does this weird "part of the URL is meaningless" thing there are tons of ways of manually sharing the URL that are now extremely laborious
Extremely long and extremely laborious seems a bit of an exaggeration. Most users copy and paste, no? Adding a few characters of a human readable tag doesn't warrant this response I feel. Especially when the benefit means that if I copy and paste a url into someplace, I can quickly error-check it to make sure it's the title I mean. When using the share button, the de-slugged URL can be given.
> users who share the links will think "the person can read the URL, so I won't include more context" and the person receiving the links thinks "and the URL has the title, which I can trust more than what some random user adds".
I guess? I won't bother with a rebuttal because this issue seems so minor. The benefit far outweighs some users maybe providing less context because the link URL made them do it. If someone says "My TypeScript won't compile because of my constructor overloading or something, please help", I can send stuff like:
stackoverflow.com/questions/35998629/typescript-constructor-overload-with-empty-constructor
stackoverflow.com/questions/26155054/how-can-i-do-constructor-overloading-in-a-derived-class-in-typescript
which I think is so much more useful than just IDs.
> Many web browsers don't even show the URL anymore: the pretense that the URL should somehow be readable is increasingly difficult to defend
Most do. Even still, the address bar is not the only place a URL is seen. Links in text all over the internet have URLs - particularly when shared in unformatted text (i.e. not anchor tags). And URLs should be readable to some extent. Would you suggest that all pages might as well be unique IDs? A URL like:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Is much better than
https://developer.mozilla.org?articleId=10957348203758
> how about doing it using a # on the page, so at least everyone had a chance to know that it is extra
Fair enough - I think that's a fine idea.
Not all services allow you to change the title (and therefore mutate the slug), but situations where changing the title changes the slug are so infrequent (and the consequences in this case so minor) that this is a problem mostly in theory. It's a minuscule price to pay for semantically useful URLs.
https://meta.discourse.org/t/deleted-topics-where-are-they/2...
/t/ for topic, slug for readability, then a topic id and at last a reply id.
library.com/books/1as03jf08e/Moby-Dick/
library.com/books/1as03jf08e/Hitchhikers-Guide-to-the-Galaxy
Now lead to the same place...
You can take off any of the words past the numeric ID and it still works just fine.
For example.
router.get('/article/:article_shortid/:slug?', function (req, res) { /* look up by shortid; the optional slug is ignored */ });
catches /article/28424824/this-is-my-article, and also /article/28424824
This problem of naming URLs is also present in file system design. File names can be short, meaningful, context-sensitive, and human-friendly; or they can be long, unique, and permanent. For example, a photo might be named IMG_1234.jpg or Mountain.jpg, or it can be named 63f8d706e07a308964e3399d9fbf8774d37493e787218ac055a572dfeed49bbe.jpg. The problem with the short names is that they can easily collide, and often change at the whim of the user. The article highlights the difference between the identity of an object (the permanent long name) versus searching for an object (the human-friendly path, which could return different results each time).
For decades, the core assumption in file system design has been to provide hierarchical paths that refer to mutable files. A number of alternative systems have sprouted which upend this assumption - by having all files be immutable, addressed by hash, and searchable through other mechanisms. Examples include Git version control, BitTorrent, IPFS, Camlistore, and my own unnamed proposal: https://www.nayuki.io/page/designing-a-better-nonhierarchica... . (Previous discussion: https://news.ycombinator.com/item?id=14537650 )
Personally, I think immutable files present a fascinating opportunity for exploration, because they make it possible to create stable metadata. In a mutable hierarchical file system, metadata (such as photo tags or song titles) can be stored either within the file itself, or in a separate file that points to the main file. But "pointers" in the form of hard links or symlinks are brittle, hence storing metadata as a separate file is perilous. Moreover, the main file can be overwritten with completely different data, and the metadata can become out of date. By contrast, if the metadata points to the main data by hash, then the reference is unambiguous, and the metadata can never accidentally point to the "wrong" file in the future.
Rather, everything would automatically be ingested, collated, categorized, and (of course) searchable by a wide range of metadata. Much of it would be automatic, but it would also support hand-tagging files with custom metadata, like project or event names, and custom "categorizers" for more specialized file types.
Depending on the types of files, you could imagine rich views on top -- like photos getting their own part of the system with time-series exploration tools, geolocation, and person-tagging with face recognition, or audio files being automatically surfaced in a media library, with heuristics used to classify by artist, genre, etc. But these views would be fundamentally separate from the underlying data, and any mutations would be stored as new versions on top of underlying, immutable files, making it easy to move things between views or upgrade the higher level software that depended on views.
This was years ago, and I never got around to doing any of that (it would've been a massive project that likely would've fallen flat on its face). And now, in a roundabout kind of way, we've ended up with cloud-based systems that accomplish a lot of what I had imagined. I'd go so far as to say that local filesystems are quickly becoming obsolete for the average computer-user, especially those who are primarily on phones and tablets. It's a lot more distributed across 3rd party services than what I had in mind, but that at least makes it "safer" from being lost all at once (despite numerous privacy concerns).
A new user profile will come with a prominent "All my files" live search shortcut that just shows all your files in a jumble sorted by when you last used them. Then they expect you to search and filter through them by metadata (which is automatically extracted/indexed by Spotlight). Then you can save these searches/filters as saved searches which are live-updating virtual folders.
Photos and videos are managed entirely in the Photos app, and organised almost exactly according to your suggested categories (literally called Memories (for events), Places, People). iTunes handles audio files automatically (you can sync your own files into Apple Music, where they're categorised in the same way as any other music).
As I understand it, APFS also handles copying and modifying in a similar way to your description, where a copy of a file is treated as a mutation of the previous version.
Everything is even synced through iCloud to all your devices, with all macOS devices keeping a rather complete copy, unless they run out of disk space.
This would require someone to have their first experience of computing in the modern Apple ecosystem (literally iOS 11 and up) to avoid preconceptions about filesystems, since traditional folders are still supported, but it's possible.
Problem is, our existing file I/O APIs are very much centered around the notion of mutable files, and globally shared state with no change isolation.
I didn't quite understand the point of the hierarchical "search URL" when you have the /search one implemented, and they go on to say you could implement both if you have the time and energy.
Natural keys, meaning entity identification by some unique combination of properties, are hard to get right (oops, your email address isn't unique, or it's a mailing list) and a pain to translate into a name (`where x = x' and y = y' and z = z'`, or `/x/x'/y/y'/z/z'`, etc.).
Surrogate keys, on the other hand, make it easy to identify one and only one object forever, but only so long as everybody uses the same key for the same thing.
And as mentioned in the article, the most appropriate is usually both. Often you don't have the surrogate key, so you need to look up by the natural key, but when you do have the surrogate key, it's fastest and most likely to be correct if you use that in your naming scheme.
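A toy illustration of having both (in-memory maps standing in for database indexes; all data and names are invented): the surrogate key is the permanent identity, and the natural key is just an index into it.

```javascript
// Surrogate key as the one true identity; natural key (email) as a
// secondary index that can be changed or rebuilt without touching
// the identity itself.
const byId = new Map();    // surrogate key -> record
const byEmail = new Map(); // natural key   -> surrogate key

function insertUser(id, record) {
  byId.set(id, record);
  byEmail.set(record.email, id);
}

function findByEmail(email) {
  const id = byEmail.get(email);                 // natural-key lookup...
  return id === undefined ? null : byId.get(id); // ...resolved via surrogate
}
```

When the email changes, only the index entry moves; every stored reference to the surrogate key stays correct.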
There are only two hard things in Computer Science: cache invalidation and
naming things.
-- Phil Karlton
https://martinfowler.com/bliki/TwoHardThings.html

"There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors."
It's worth including the third saying on the page just for completeness:
"There are only two hard problems in distributed systems: 2. Exactly-once delivery 1. Guaranteed order of messages 2. Exactly-once delivery"
The article is largely based on a misguided premise: the idea that URLs should be conceptualized as either names or identifiers. URLs are neither: they are addresses of web pages. The things located at the URL may have names or identifiers, but by design of the web the stuff located at an address is mutable while the address is immutable.
This is an important point because it breaks the analogies to books or bank accounts. A physical copy of Moby Dick is a thing that may be located at a given address, or not. The work of fiction "Moby Dick" has an ISBN number, but the ISBN number is metadata, not an address. A bank account number is also metadata, not an address.
So I get the feeling that URLs should be conceptualized as addresses first and foremost. This isn't a magic bullet for the problem the blog post addresses (how to design URLs) but I think it gives some perspective:
* If the "thing" at the URL will always be conceptually the same "thing", but its name or other metadata may change, it makes sense to assign that thing a unique identifier and use this as part of the URL. (Because the thing with this ID will always be found at this address.)
* If the name of the stuff located at the URL is never going to change, it makes sense to use the name as part of the URL. (Because the stuff with this name will always be found there.)
* "Search results" as discussed in the blog post are a special case of the previous point: if a URL will always contain search results for a certain query, it makes sense to use the name of the query as part of the URL.
* There are also URLs that fall outside the name or identifier paradigms. http://www.ycombinator.com/about/ is the address of a bunch of stuff, which is not necessarily a single coherent thing with either an ID number or a name, but is a very reasonable address at which some content may be located.
Maybe this is all obvious, but it really helps me think about the issue, whereas the blog post confused some things for me, so I thought I'd share.
But an address is a designator/identifier.
You can give an object some metadata like "current address", but that's different from saying the address alone identifies the object.
The author appears to have forgotten about 3xx redirection codes which were intended to solve that very problem.
There's also the problem of aliasing; if another book by the same name is later added to the shelf, the hierarchical name now references an entirely different resource.
This is why we can't have nice things.
Redirecting to canonical URLs is canonicalization 101. https://support.google.com/webmasters/answer/139066?hl=en#4
Also, what would be an example of same-origin redirect abuse?
The small dip caused by 301s was even recently removed altogether.
Abstract
In many disciplines, data are highly decentralized across thousands of online databases (repositories, registries, and knowledgebases). Wringing value from such databases depends on the discipline of data science and on the humble bricks and mortar that make integration possible; identifiers are a core component of this integration infrastructure. Drawing on our experience and on work by other groups, we outline 10 lessons we have learned about the identifier qualities and best practices that facilitate large-scale data integration. Specifically, we propose actions that identifier practitioners (database providers) should take in the design, provision and reuse of identifiers. We also outline the important considerations for those referencing identifiers in various circumstances, including by authors and data generators. While the importance and relevance of each lesson will vary by context, there is a need for increased awareness about how to avoid and manage common identifier problems, especially those related to persistence and web-accessibility/resolvability. We focus strongly on web-based identifiers in the life sciences; however, the principles are broadly relevant to other disciplines.
claimer: I am one of the many authors.
I started changing my way of looking at identity by reading the rationale of clojure (https://clojure.org/about/state#_working_models_and_identity) -> "Identities are mental tools we use to superimpose continuity on a world which is constantly, functionally, creating new values of itself."
The timeless book "Data and reality" is also priceless: https://www.amazon.com/Data-Reality-Perspective-Perceiving-I....
More specifically concerning the article, I do agree with the point of view of the author distinguishing access by identifier and hierarchical compound name better represented as a search. On the id stuff, I find the amazon approach of using URN (in summary: a namespaced identifier) very appealing: http://philcalcado.com/2017/03/22/pattern_using_seudo-uris_w.... And of course, performance matters concerning IDs and UUID: https://tomharrisonjr.com/uuid-or-guid-as-primary-keys-be-ca....
Happy data modeling :)
EDIT: - add an excerpt from the clojure rationale
For example, we ingest gamertags and IDs from players of Xbox Live, PSN, Steam, Origin, Battle.net, etc. - each have their own requirements in terms of what is allowed in a username, and even whether or not they're unique. Often you can't ensure a user is unique by their gamertag alone. You can't even ensure uniqueness based on gamertag and platform name. Reality is that search is almost always required in these cases, and that's why we've implemented search in the way described in this article, with each result pointing to a GUID representing a gamer persona.
This also solves the technical† challenge of handling renaming, even within a single platform. (Steam, I hate you.)
† Another challenge is social, esp. regarding abuse.
Books in a library are seldom renamed, if ever. The named URL would be almost as permanent as the canonical URL.
However in their earlier example of a bank account, a personal account name is typically the account holder name and the type of account, and both of these could be subject to change as a result of marriage, death, or the change in products offered by a bank. Even then, the rate of change is low.
A better example that the author could have (should have?) used is that of a news website where the article title may change frequently and yet there is a desire to make the link indicate the type of content at the destination... this is the real crux of the issue.
On a news site a canonical identifier driven URL may be correct... but does not sell or communicate the story behind the link and the link is likely to be shared without context. Sure you may see `example.com/news/a49a9762-3790-4b4f-adbf-4577a35b1df7` but this could be any news... it is far less obvious what is behind the link than the banking example as diversity in news stories is huge.
Yet the named URL would likely fail too, as once created and shared it should not mutate or at least should remain working... and yet the story title is likely to be sub-edited multiple times as news evolves.
The best scheme was not even mentioned in the article... combining both an identifier with a vanity named part: `example.org/news/a49a9762-3790-4b4f-adbf-4577a35b1df7_choosing_between_names_identifiers_URLs` . The named part can vary as it is not actually used for lookup, only the prefix identifier is used for lookup.
Though that has its own downside... one can conjure up misleading named sections for valid identifiers to misdirect and mislead.
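The lookup side of that scheme is nearly a one-liner (a sketch; the underscore separator is the convention from the example above):

```javascript
// Only the leading identifier matters; everything after the first
// underscore is a vanity label and is ignored for lookup.
function extractId(pathSegment) {
  return pathSegment.split('_', 1)[0];
}
```

The misleading-slug problem can be mitigated by 301-redirecting (or emitting a rel=canonical for) any request whose vanity part doesn't match the current title.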
What does this mean? Is it just to say don't use the name hierarchy but rather the permalink-key as identity in the database?
Those who do not understand UNIX are condemned to reinvent it, poorly. -- Henry Spencer
Hard links, symlinks and inodes.

/shelf/{something}
{something} could be a name - 'american literature'
{something} could be an identifier - '20211fcf-0116-4217-9816-be11a4954344'
if someone calls:
https://library.com/locations:

{
  "kind": "Shelf",
  "name": "20211fcf-0116-4217-9816-be11a4954344"
}
now we have a shelf named with the id of a different shelf
and the meaning of
/shelf/20211fcf-0116-4217-9816-be11a4954344/book
is now ambiguous
i don't know a great way to avoid this
this is unambiguous, but i don't think my co-workers would like it:

/shelf/name/{id}/books
/shelf/id/{id}/books
I think this would only be slightly more popular
/shelf/name/{id}/books
/shelf/{id}/books
because the thing after shelf/ would not consistently be an id
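One way to keep a single /shelf/{something} route while avoiding that ambiguity (a sketch; the "name:" escape prefix is invented): treat anything UUID-shaped as an id, and require an explicit prefix for the rare name that happens to look like one.

```javascript
// Anything that parses as a UUID is an id; names that look like UUIDs
// must be escaped with an explicit "name:" prefix.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function parseShelfRef(segment) {
  if (segment.startsWith('name:')) return { kind: 'name', value: segment.slice(5) };
  if (UUID_RE.test(segment)) return { kind: 'id', value: segment };
  return { kind: 'name', value: segment };
}
```

The server would also have to reject unprefixed UUID-shaped names at creation time, otherwise /shelf/{that-name} silently resolves to the wrong shelf.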
That way, you have the best of both worlds in all cases.
If another object tries to use the same URL as another object (which was used first), then a new URL must be generated (just add something at the end of the name).
Could you explain why?
https://react-native.canny.io/feature-requests/p/headless-js...
For example, a post with title "post title" will get url "post-title".
Then a second post with title "post title" will get url "post-title-1".
Since there's only one URL part associated with each post, it's a unique identifier.
This gets rid of the ugly id in the URL, for epic URL awesomeness.
Furthermore, if you edit the first post to have "new post title" then its URL will update to "new-post-title", but "post-title" will still redirect to "new-post-title".
Someday I'm gonna open source a lib that lets you easily add awesome URLs to your app. :)
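A sketch of that whole scheme (in-memory maps standing in for a database; all names invented): unique slugs with numeric suffixes for duplicates, and stale slugs left behind as redirects after a rename.

```javascript
const slugToPost = new Map(); // current slug -> post id
const redirects = new Map();  // stale slug   -> current slug

function slugify(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}

function assignSlug(postId, title) {
  const base = slugify(title);
  let slug = base;
  // Suffix duplicates, but let a post keep its own slug on re-save.
  for (let n = 1; slugToPost.has(slug) && slugToPost.get(slug) !== postId; n++) {
    slug = `${base}-${n}`;
  }
  // Leave a redirect behind for any slug this post used to own.
  for (const [old, id] of slugToPost) {
    if (id === postId && old !== slug) {
      slugToPost.delete(old);
      redirects.set(old, slug);
    }
  }
  redirects.delete(slug); // a slug that is live again must stop redirecting
  slugToPost.set(slug, postId);
  return slug;
}

function resolve(slug) {
  while (redirects.has(slug)) slug = redirects.get(slug);
  return slugToPost.has(slug) ? slugToPost.get(slug) : null;
}
```

`resolve` follows redirect chains, so even a slug that has been renamed twice still lands on the current post.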
Did you mean "slug"? What you are describing is a basic feature of most blogging software since the inception of blogs...
– Automatically handling duplicates
– Avoiding needing to include the unique ID in the URL
– Updating the URL after editing the post
– Redirecting previous versions to the new version
But the sheer arrogance of serving a webpage that doesn't render any text unless you execute their JavaScript really annoys me. It's not a fancy interactive web-app, it's a webpage with some text on it.
It’s not worth the time to appeal to such a minority share of internet users.
Humans using off the shelf browsers aren't the only ones who consume webpages.
That argument also doesn't address OP's complaint: regardless of whether everyone has JS and uses it, the page is only rendering text, why is JS even necessary? It's not a web app, it doesn't have any special functionality etc, it doesn't have any legitimate reason to use JS, but for whatever reason, we're forced to use it anyway.
Mandating JS to get any content, no matter how static, seems like the start of the death of e.g. Linked Data and of the web as an open-standards-based platform. I know I'm in the minority, but diversity is a strength, and there are few places more important than the web.