Connecting... Waiting... It was slow, both because of dial-up speeds measured in kbit/s and because of the ping to websites, and every page load felt like you were literally sending a request to another part of the planet. In a sense that was actually happening, and it was very different from what we experience now.
But most importantly, there was zero VC money in that Internet. Only very niche websites, zero online services; even an email address was difficult to obtain and felt like a real privilege. The mere fact of being connected made nobody feel like a stranger.
I kind of miss that Internet, but I'm grateful that once I was part of it.
No ads, no random tits, nobody trying to convert you to their politics, trying to scam you, or telling you to kill yourself. Just people sharing interesting things.
Really makes me excited for the internet until I close the tab.
[1] http://line-mode.cern.ch/www/hypertext/WWW/TheProject.html
My brain even added a CRT distortion effect to it, even though that's not actually happening.
edit: okay, no, I am an idiot. Those pages were made in 2013:
Edit: Answered my own question I think. If you choose the option to browse "using the line-mode browser simulator", you can literally type in "Back" to go back.
So far, I like this line-mode browser simulator much more than what is commonly available for the command line (lynx or links2). Does anyone know of a modern implementation of it? (One where links are numbered, instead of the user having to navigate around the document.)
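I don't know of a maintained implementation, but the numbered-links idea is simple enough to sketch. Below is a minimal, hypothetical example (all names are mine, not from any existing project) that renders HTML as plain text with `[n]` markers after each link, in the spirit of the original line-mode browser:

```python
# Sketch of line-mode-style rendering where links are numbered,
# roughly how the CERN line-mode browser presented hypertext.
# This is an illustrative toy, not a real browser.
from html.parser import HTMLParser


class LinkNumberingParser(HTMLParser):
    """Collects anchor hrefs and emits text with [n] markers after links."""

    def __init__(self):
        super().__init__()
        self.links = []        # ordered list of hrefs, index n-1 for link [n]
        self.parts = []        # text fragments of the rendered page
        self._in_anchor = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)
                self._in_anchor = True

    def handle_endtag(self, tag):
        if tag == "a" and self._in_anchor:
            # Emit the link's number right after its anchor text.
            self.parts.append(f"[{len(self.links)}]")
            self._in_anchor = False

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.parts.append(text)


def render(html):
    """Return (rendered text, list of hrefs) for an HTML fragment."""
    parser = LinkNumberingParser()
    parser.feed(html)
    return " ".join(parser.parts), parser.links


page = '<p>See <a href="/www/">the project</a> and <a href="/help">help</a>.</p>'
text, links = render(page)
print(text)    # See the project [1] and help [2] .
print(links)   # ['/www/', '/help']
```

A real version would fetch pages over HTTP and prompt for a number to follow, but the numbering mechanism itself is just this: keep an ordered list of hrefs and print the index inline.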
Navigation was moving a cursor around to highlight points of interest, some of which would be links to further stuff or controls to do something like go back or forwards.
Install lynx or links2 (i.e. text-mode browsers) and you'll get the idea.
The vaguely graphical efforts with browsable content that you might recognise from before the www were the likes of CompuServe. That got you a sort of forum-style interface.
It's quite hard to explain just how fast things have moved over the last 40-odd years (I'm 1970 vintage, so 55). I should also point out that my granddad saw rather a lot of change from 1901 to 1989. To be honest, the last 15-odd years are even madder than the previous 25, and that's just my own personal recollection.
You didn’t have cookie banners because you didn’t have cookies, because there was no need for most websites to have cookies.
Would love to see the source for the original httpd.
Though you can browse and download the latest version, 3.0A (1996), there is a directory with older versions, but it's a jumble of files from different versions: https://www.w3.org/Daemon/old/
New days of wonder seem to be ahead, though. That said, there's about 100X more angst involved these days.
I hope someone will write a "skin"/theme for Ladybird (whose August alpha release we are keenly awaiting) that looks like that before I'll have to do it myself...
Ted Nelson's dream since the early '60s: all the world's literature in one publicly accessible global online system (the analogy: you can today get a telephone link from anywhere to anywhere, so why not from any text to any other?). Every reference to a text would lead to royalties being paid automatically to the author. Autodesk (the makers of AutoCAD) will produce a product "real soon now". It includes the use of full versioning (claimed to be horrifyingly complex), "hot links" (called transclusions), and zippered texts (e.g. parallel texts for translations or annotations).
Ofc these pages cannot replace SPAs. That's not the point. The point is: Much of the web isn't SPAs. And much of what is SPAs shouldn't be SPAs. Much of the web is displaying static, or semi-static information. Hell, much of the web is still text.
But somehow, the world accepted that displaying 4KB of text has to require transmitting 32MiB of data, much of it arbitrary code that has no earthly business eating my CPU cycles, as the new normal. Somehow everyone accepts that text-only informational pages need to abuse the scroll event, or display giant hero banners. Somehow, having a chatbot popup on a restaurant's menu page is a must (because ofc I wanna talk to some fuckin LLM wrapper about the fries they sell!!!), but a goddamn page stating the place's address and telephone number is nowhere to be found.
https://idlewords.com/talks/website_obesity.htm
This talk was given over a decade ago, and its takeaways are as relevant today as they were back then; in fact, maybe even more so.
Everyone did accept it, because when you needed information from a page that pulls that shit, you didn't have a choice, and when you did have a choice, all the alternatives did it too.
Nowadays people just ask ChatGPT for the information they need so they don't have to visit those awful sites anymore.
Some examples:
We now have to accommodate all types of user agents, and we do that very well.
We now have complex navigation menus that are not accessible without JavaScript, and we do that very well.
Our image elements can now have lots of attributes that add a bit of weight but improve the experience a lot.
Etc.
Also, things are improving/self-correcting. I saw a listing the other day for a senior dev with really good knowledge of the vanilla stuff. The company wants to cut down on the use of their FE framework of choice.
I cannot remember seeing listings like that in 2020 or 2021.
PS.
I did not mean this reply as a counterpoint.
What I meant to say is: even if we leave aside the SPAs that should not be SPAs, we see the problem on simple document pages too. We have been adding lots of stuff there as well. Some of it is good, but some is bad.
Simple websites don't even care about the UA.
> We now have complex navigation menus that cannot be accessible without JavaScript, and we do that very well.
Is there an actual menu which is more than a tree? Because a dir element that gets rendered by the UA into native menu controls would be just so much better.
https://info.cern.ch/hypertext/WWW/DesignIssues/Overview.htm...
> When (s)he has found an overview page which (s)he feels ought to refer to the new data, (s)he can ask the author of that document (who ought to have signed it with a link to his or her mail address) to put in a link.
> By the way, it would be easy in principle for a third party to run over these trees and make indexes of what they find. Its just that noone has done it as far as I know
Performance: 100
Accessibility: 86
Best Practices: 92
SEO: 90
Website about this project: https://first-website.web.cern.ch/
Some previous discussions:
6 months ago https://news.ycombinator.com/item?id=45125239
It's a sad fact that a large part of the web doesn't work without Javascript, a technology which enables privacy-invasive practices (and surveillance capitalism). It wasn't as bad when progressive enhancement was the norm.
CERN rebuilt the original browser from 1989 (2019)
What is DU?
To put it in terms of a simple example: you need several HTML pages before one of them can link to another, but so far that's just hypertext. Then you need pages spread out across multiple sites to be able to create a web.
I telnetted from my PC to a VAX, then to an X.25 PAD, then onto a JANET system, then to somewhere in the US, and then to CERN. Eventually I'd get a menu with a link to the www. I'd then navigate the www with different keystrokes.
www was/is free form links to stuff instead of hierarchical menus. It was an evolution not a revolution and there is no need to invoke "chicken or egg".