I think you both stumbled upon a fundamental part of the discussion: the tension between finding a way to identify resources (or concepts, or physical things) in a unique and unambiguous fashion, and affordances provided by natural language that allow human minds to easily associate concepts and labels with the things they refer to.
The merit of UUIDs, hashes, or any other random string of symbols that falls outside the domain of existing natural languages is that it doesn't carry any prior meaning until an authority within a bounded context associates that string with a resource by way of accepted convention. In a way, you're constructing a new conceptual reference framework for (a part of) the world.
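To make that concrete, here's a minimal sketch (the registry dict and resource descriptions are made up for illustration): the identifier itself is meaningless, and only the authority's lookup table gives it meaning.

```python
import uuid

# Hypothetical registry standing in for "an authority within a
# bounded context". The binding between identifier and resource
# lives entirely here, not in the identifier string itself.
registry = {}

def mint_identifier(resource_description: str) -> str:
    """Mint a random, meaning-free identifier and bind it by convention."""
    opaque_id = uuid.uuid4().hex
    registry[opaque_id] = resource_description
    return opaque_id

movie_review_id = mint_identifier("A review of Citizen Kane")
# The string tells you nothing on its own; only the registry does.
assert registry[movie_review_id] == "A review of Citizen Kane"
```

Two independently minted identifiers never collide in practice, and neither leaks anything about what it names, which is exactly the property being described.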
The downside is that random strings of symbols don't map to widely understood concepts in natural language, making URLs that rely on them utterly incomprehensible unless you dereference them and match your observation of the dereferenced resource with what you know about the world (e.g. "Oh! http://0x5235.org/5aH55d actually points to a review of 'Citizen Kane'").
By using natural language when you construct a URL, you're inevitably incorporating prior meaning and significance into the URI. The problem is that you then end up with the murkiness of linguistics and semantics, and you get all kinds of weird wordplay if you let your mind roam free over the labels in the URI itself.
For instance, there's the famous painting by René Magritte, "The Treachery of Images", which correctly points out that the image is, in fact, not a pipe: it's a representation of a pipe. [1] By the same token, an alternate URI to this one [2] might read http://collections.lacma.org/ceci-n-est-pas-une-pipe, which is, incidentally, correct as well: it's not a pipe, it's a URI pointing to a painting that represents a physical object - a pipe - with the phrase "this is not a pipe."
Another example would be that a generic machine doesn't know whether http://www.imdb.com/titanic references the movie Titanic or the actual ship, unless it dereferences the URI, whereas we humans understand that it's the movie because we have a shared understanding that IMDB is a database about movies, not historic ships. Of course, when you build a client that dereferences URIs from IMDB, you basically base your implementation on that assumption: that you're working with information about movies.
Incidentally, if you work with hashes and random strings, such as http://0x5235.org/5aH55d, your client still has to be founded on a fundamental assumption: that you're dereferencing URIs minted by a movie review database. Without context, a generic machine would perceive it as a random string of characters that happens to be formatted as a URI, and dereferencing it would just yield a random stream of characters that can't possibly be understood.
[1] https://en.wikipedia.org/wiki/The_Treachery_of_Images

[2] https://collections.lacma.org/node/239578
It's an interesting topic. I agree with you that identifiers can be intended for humans or for machines, and there are often different features to optimize for depending on which. URIs occupy a strange middle ground: they inherit the pitfalls of having to account for both humans and machines.
In an interesting way, each individual website has to come up with its own identifier scheme. It may be a simple slug (/my-new-blog/), or it may be an ID system (?post=3). It could be something else completely.
There is some value in offering that creativity, but a system where URIs are derived from content also makes a lot of sense to me. You mentioned a hash which I think is the right idea.
It seems reasonable enough that URIs could take inspiration from other technologies like git, or even (dare I say) blockchains. That leads naturally to built-in support for archiving older versions, since content is diffed between versions.
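A rough sketch of the content-derived idea, in the spirit of how git names objects by a hash of their content (the domain and path scheme below are invented for illustration):

```python
import hashlib

def content_uri(content: bytes, domain: str = "example.org") -> str:
    """Derive a URI path from the content itself, git-style.
    A real system would pick its own scheme; this is illustrative."""
    digest = hashlib.sha256(content).hexdigest()
    return f"https://{domain}/objects/{digest[:12]}"

v1 = content_uri(b"My new blog post, first draft.")
v2 = content_uri(b"My new blog post, revised.")

# Each revision gets its own stable identifier, so older versions
# stay addressable -- the "built-in archiving" idea above.
assert v1 != v2
assert v1 == content_uri(b"My new blog post, first draft.")
```

The identifier is deterministic (same content, same URI) yet completely opaque, which puts it squarely on the "meaningless string" side of the tension discussed above.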
There are some fun problems to think about, like how to optimize the payload for faster connections and then generate reverse diffs for visiting previous versions. Or whether browsers should assume you always want the newest version of the page and automatically fetch that instead.
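The reverse-diff idea can be sketched with the standard library: the server keeps only the newest version plus a delta, and reconstructs older versions on demand. This uses Python's difflib purely to illustrate the scheme; a real system would use binary diffs.

```python
import difflib

# Hypothetical two revisions of a page.
newest = ["<h1>Post</h1>", "<p>Revised text.</p>"]
older  = ["<h1>Post</h1>", "<p>Original text.</p>"]

# Stored alongside the newest version: a delta from newest -> older.
reverse_delta = list(difflib.ndiff(newest, older))

# A client asking for the previous version triggers reconstruction:
# difflib.restore(delta, 2) recovers the second sequence (the older one).
reconstructed = list(difflib.restore(reverse_delta, 2))
assert reconstructed == older
```

The design choice here is the same one version control systems face: optimizing for the common case (fetching the latest version) at the cost of extra work when walking back through history.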
This solves some problems, and creates many others. Interesting thought experiment anyway.