* Beaker (https://beakerbrowser.com), the peer-to-peer browser, just shipped its beta - and it has some exciting features. Particularly the built-in editor, which means you can edit, serve and read your pages all from the browser. (Blogging in Beaker is as simple as visiting: hyper://a8e9bd0f4df60ed5246a1b1f53d51a1feaeb1315266f769ac218436f12fda830/. And the posts are stored locally.)
* https://special.fish/ This completely low-tech social network has taken off. The innovation here is that it's all just focused on profile pages - not feeds of random posts.
* There's a growing subculture of public Tiddlywikis (philosopher.life, sphygm.us, etc) - rather than focusing on protocols and APIs, they are much more focused on how to organize and style personal hypertext.
* As for RSS, well, as HN custom insists, I am also commenting to plug my own fraidyc.at. See, you knew it was here.
* On a related note, I've also been working on an RSS/Atom extension to handle ephemeral posts: live streams, "stories", pinned posts, etc. https://github.com/kickscondor/fraidycat/wiki/RSS-Atom-Exten...
* There's also a forum on tiny personal link directories that's been forming at https://forum.indieseek.xyz. The idea here is to use Yahoo! or DMOZ style link directories at a smaller scale, to catalog corners of the Web. (Note that this whole comment itself is a kind of small 'directory'. Rather than an algorithm stepping in to show you 'related' stuff, I have.)
Both of your projects will serve as inspiration for me. Your personal site especially, as it feels creative and liberated, things I strive for in my personal life.
Thank you.
Fraidyc.at is very cool though - I remember seeing that a while back and think I'll likely end up using it at some point when I have some time to play with it.
Also, Fraidyc.at works with Tiddlywikis.
> https://special.fish/ This completely low-tech social network has taken off.
My god, I haven’t been so drawn into a website in a long time. I actually forgot what that feeling was like. I just tried downloading the Windows executable. Initially it said it would take 1.5 hours, but then the download just failed. I tried a few more times with the same result. This is ironic on so many levels.
Title: The Unicorns Fell Into a Ditch
Link: bloomberg article
In this example the title is pretty much meaningless so I need DF's take on the topic before I make a decision to go off and visit Bloomberg. My existing RSS feed provides DF's blog content (text).
So I just wanted to understand whether I'm missing something here, or in fact FC is doing exactly what you designed it to do; title and link to 3rd party page only. Pretty much like how Hacker News (HN) operates?
Perhaps it's just the way I personally navigate. I never click on HN titles to go to 3rd party links. I always hit the comments first. The top comments usually give me a good overview before I decide to visit the 3rd party page.
Probably FC isn't intended for me, but I just wanted to be sure I'm not missing something, because I'm genuinely excited to use it. Regardless, many thanks for making FC.
P.S. I love the retro look of your personal site and FC's home page.
Also, there is a redesign underway that will give a richer post view: https://github.com/kickscondor/fraidycat/issues/30
I used a special character (&) in my password. I removed the ampersand and was able to reset password, and log back in.
Sorry if this isn't the correct channel for bugs (feel free to redirect me).
Cheers.
[1] If anyone wants to know more about it, see https://en.wikipedia.org/wiki/Net.art
> commenting to plug my own fraidyc.at. See, you knew it was here.
This is quite a nice implementation and the back story is cool.
I have also been thinking about this for some time, but what I want to achieve is decentralized RSS feeds. Servers go down over time, are blocked or suffer some other issue. I want to be able to help keep alive the content I consume, possibly without the need for centralization.
Ideally the aim would be to maintain backwards compatibility and be as decentralized as possible, but I've yet to solve this problem without resorting to just using torrents.
Of course, ideally, the websites themselves would publish articles on the hypercore network, and be the source of trust here, but I don't see why you couldn't do it from RSS feeds.
I am not sure whether the data is content-addressed on hypercore? If so, multiple people could archive the same content to make sure it is identical, and still enjoy the benefits of distributed hosting :)
More about hyperdrive/hypercore: https://news.ycombinator.com/item?id=23180572
Freenet is a p2p network with a focus on anonymity and resistance against censorship. It's not a browser, but for freesites it's got protections in place to filter out dangerous active content.
The new beta has been quite depressing for me, as all my personal apps stopped working due to their complete rewrite of their API. I don't know if I'll ever be willing to start it all over again.
But for anyone who hasn't already played with it, I totally recommend it. It feels like the web we should have had.
EDIT: for anybody curious, we had to make breaking changes to the p2p protocol and used that as a chance to bundle a lot of improvements -- mainly with performance and reliability. It sucked to break existing content though.
Here's my first commit: https://github.com/kickscondor/duxtape/commit/55dbde9519aedb.... Can't really call it a rewrite - more of a search-and-replace. (Though I had a bit of code that used the old peer sockets - and that code works differently now.)
Thanks for the heads-up! There are so many links to follow up on here in this thread.
Thank you so much for all the hard work.
I downloaded and am playing with fraidyc.at and am so far really liking the idea -- the whole idea made more sense after watching your video and then actually testing it. Thanks for making it.
It never fails to amaze me how much amazing stuff is out there online, hidden by a thick layer of top search results, and even more than that, the sheer amount of individual and collective effort that has been put into each of these sites. Someone mentioned the word "niche" and there is certainly some weirdly (or wonderfully) specific content you will find in the Wiby.me index. Lots of sites that haven't been updated since 1998, but still have an enormous and encyclopedic list of everything related to some topic (like the characteristics of different types of tomatoes, or how to build a motorcycle from spare parts or whatever). Some of it may be a little out of date, but a lot of it has been submitted for indexing precisely because of its timelessness or continued usefulness.
Whenever I feel hopeless about the current state of the web, I find this is the perfect antidote!
If you're unfamiliar with the concept, a webring was a simple circular linked list. You had a link on your knitting-themed site to the "next knitting-themed site", that site had a link to the next one, etc.
To join the ring, you just emailed someone and said, "Hey, I too have a knitting-themed site; can you add me to your webring?" They looked at your site and changed their link to point to yours; you added the link they previously had, and the ring continued.
I want to build something simple that'll serve a small widget with previous/next/random site buttons. It'll work like the webrings of old regarding the curation aspect, so to get added you'll need to be referred to by someone.
Would you use something like that? You'd basically just drop a bit of HTML on your page and it wouldn't load heavy JS/analytics/crap, just whatever was necessary to paint a few links.
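The circular structure described above is easy to sketch in code. Here's a minimal, hypothetical Python model of such a ring with the previous/next/random lookups the widget would need (the `Webring` class and the example site URLs are invented for illustration):

```python
import random


class Webring:
    """A webring modeled as a circular list of member sites.

    Each member's widget links to the previous site, the next site,
    and a random member, wrapping around at both ends of the list.
    """

    def __init__(self, sites):
        self.sites = list(sites)  # insertion order defines the ring

    def next(self, site):
        i = self.sites.index(site)
        return self.sites[(i + 1) % len(self.sites)]  # last wraps to first

    def prev(self, site):
        i = self.sites.index(site)
        return self.sites[(i - 1) % len(self.sites)]  # first wraps to last

    def random(self, site):
        # Pick any member other than the current site.
        return random.choice([s for s in self.sites if s != site])


ring = Webring([
    "https://knits.example",
    "https://purls.example",
    "https://yarns.example",
])
print(ring.next("https://yarns.example"))  # wraps: https://knits.example
print(ring.prev("https://knits.example"))  # wraps: https://yarns.example
```

A real version would serve these lookups over HTTP and render them as three plain links, so member pages only embed a few bytes of HTML with no heavy JS.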
Hotline Webring: https://hotlinewebring.club
XXIIVV Webring: https://webring.xxiivv.com
Weird Wide Webring: https://weirdwidewebring.net
I'm personally more into personal directories and blogrolls than these random clicks - but they still seem to be a good way to put together a small community. (And this post isn't meant to discourage you - but rather to encourage you to form your own.)
You add the webring to your site, allowing your visitors to discover more sites you specifically like, then you can opt-in to have your RSS feed fed back to the main site.
If I continue with this thought exercise, a lot of the big indoor shopping malls around me have been knocked down and replaced with standalone outdoor stores (walled gardens?).
I'm not sure where things are going next.
1) The shift in the treatment of spammy content: from something to squash to something to allow, and even promote over better content, provided it follows certain (Google's) rules. This shift in Google's behavior happened around 2008-2010, and we haven't seen a period of spammy crap content getting heavily downranked since then, the way we used to when they were still trying to stay ahead of it rather than give it a "legitimate" avenue as a method of control. Because Google is still the most important search provider to appease, the rest have been unable to direct behavior toward anything better than what does well on Google, so their results aren't much better.
2) A move away from actual or de facto open systems & protocols to deliberately carved-up communities. The only thing keeping chat, Twitter-like services, and other social media—hell, even YouTube, so far as some kind of format for hosting video with metadata—from being standards or protocols is that business incentives reward "owning" a userbase (so you can better spy on them, and to keep anyone from providing a better, perhaps less-spying-laden client and "stealing" ad-viewing eyeballs).
Both of these are fundamentally problems of the spyvertising economy taking over the Web and I think a lot of the issues would go away if we could (legally—I don't think tech will do it) permanently and completely break that. More specifically a big part of the problem is Google, though of course the rest of the Web giants are gleefully following similar bad incentives.
One development that would be good for small web sites to look into is schema.org linked-data formats. Those might simply be too effort-intensive for spammers to adopt (at a high level of detail), and perhaps too much of a commitment to quality and transparency (they would have to actively forge the info, which would leave them open to bans given the lack of plausible deniability), so they might become a viable signal of quality and lead to higher visibility in SERPs.
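For anyone who hasn't used it, schema.org markup is typically embedded in a page as a JSON-LD `<script type="application/ld+json">` block. Here's a rough Python sketch of what a small site might emit for an article; the helper function and all field values are invented, while the `@context`/`@type` keys follow the published schema.org vocabulary:

```python
import json


def article_jsonld(headline, author, date_published, url):
    """Build a minimal schema.org Article object as a JSON-LD string."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": date_published,
        "url": url,
    }
    # Embed the result in the page head as:
    #   <script type="application/ld+json">...</script>
    return json.dumps(data, indent=2)


print(article_jsonld(
    "Rediscovering the Small Web",
    "A. Blogger",
    "2020-05-25",
    "https://example.com/small-web",
))
```

The transparency point above follows from this: every field is a concrete, checkable claim (a named author, a real date), which is exactly what a spam site would rather not commit to.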
(Similar for things like proper separation of style from content, that have always been advocated for in the web-standards community but are not really commercially viable.)
I'm not quite sure if others have experimented with this stuff already, but it seems worth trying.
However, I feel like it's moved more in the direction of being like broadcast television: lots of content that's designed to be consumed once and then forgotten. Maybe the television analogy oversimplifies the matter. Still, I think the more that content creators view their content as going into a permanent library, the better the quality.
A lot of web traffic today transits through applications and platforms. This isn't necessarily a bad thing, although I'd hate to see even more walled gardens. My hope is only that the small and independent web not go forgotten or ignored.
https://twitter.com/earthboundkid/status/1095385048008798208
It's a search engine that removes the top million domains from your search results (or top 100,000 or 10,000, etc). I find it useful sometimes to discover things on more obscure sites.
I created a subreddit recently to help with the discovery problem and posted it on HN earlier this week.
Show HN: https://news.ycombinator.com/item?id=23287286
Subreddit: https://www.reddit.com/r/hnblogs/
It’s been going well so far, only a small solution to a big problem, but it’s been fun to discover a lot of interesting blogs from people in this community.
If you missed the initial post feel free to join and add yours too.
I'm a fan.
[Edit]: I also caught a tiny typo:
> "But the web is not always "profit-oriented" and it certainly does not need be "user-centric" (and I say this as a UX consultant)."
I think you wanted "does not need to be"
I created a PR for you: https://github.com/parimalsatyal/neu/pull/2
There are two major factors that would power a resurgence, and both could use better tools:
1. Discoverability - self-reinforcing webrings, blogrolls, directories, ad free-search etc.
2. Creation - next-gen Dreamweaver. A low-code site creator app that exports a static website as a folder of readable CSS/HTML that people can tinker with by hand (and learn), instead of being locked into one of these cloud WYSIWYG site generators. Hosting is solved. No need to tie the one to the other...
Some of the "old school" web style clashes with my aesthetic sensibilities these days -- a lot of words to say I find it somewhat ugly! -- but I miss its hobbyist, non-commercial aspect. A lot of hobby style content I find interesting has moved to YouTube or Facebook these days, and everyone who's been reading HN is aware of the lack of control authors have over those platforms...
I found myself nodding in agreement to a lot of what the author was saying.
I don't miss the "geocities look" though!
But I agree with you that the thing I miss the most is the hobbyist, non-commercial aspect. And I'm discovering a lot of great links in this HN thread!
> "There has always been a place for commerce and marketing on the web."
Not really true as I remember it. The web opened up to the public in 1993, and there was no commerce or marketing in the beginning. Even by 1996, while commerce and marketing may have existed (Amazon was founded in 1995), their place was in the background. As I remember the early web, the foreground, the "starting point" or "portal", was something like Yahoo! You had to pick a topic (direction) that you wanted to go in. For example, if you were after music, you might end up browsing the Internet Underground Music Archive. The "front page" of the portal was predominantly non-commercial, mostly generic headings for topics. If you wanted to search out something commercial, no doubt you could, but the initial starting point was intellectual curiosity. This is, IMO, what has been lost over time with regard to web use: intellectual curiosity and the ability to actually satisfy it. (A fun tangent here is the collections of inane queries that people type into Google. These are simultaneously hilarious and disturbing.)
As an experiment have a look at the Yahoo! page today. It is full of low quality mainstream "news". There is zero attention to intellectual curiosity. Nothing to see here, folks, but here is the latest news. For part 2 of the experiment, run a Google search for the term "music". The results are dominated by YouTube. Every result is directly or indirectly commercial (either selling something or conducting surveillance and serving ads), except one: Wikipedia. The chances of someone new to the web not following a link to YouTube or some other Google-controlled domain would seem almost nil.
The "onboarding" process for new web users is very different today than it was in the early 1990s. Perhaps it is still possible to approach the web with a sense of awe and wonder, pondering "What is out there?" However, a new web user is scarcely likely to end up on a non-commercial website besides Wikipedia. What is out there? Surveillance, ads and an endless supply of soon-to-be-obsolete JavaScript du jour.
I was fortunate to get into bicycle touring at a time when standalone blogs were easy to find, some of them were masterfully written, and there was not yet the mercenary desire to monetize one’s content and become an "influencer". The web of 2020 is very different, and because Google deranks older content, a lot of newbies in the hobby today won't even become aware of the former state of affairs even if much of that content remains just as relevant today.
I almost want to give myself nothing but Wikipedia, API docs, the "small web" and pubnix. I'm not sure that I can give up HN or its ilk (but the ratio of interesting content to poor content is terrible).
You can perhaps try a week of _mostly_ limiting yourself to those and see how it goes? I personally do a mix of the "normal" web and more niche stuff out there.
Opera Mini is an awesome browser for such connections. Average page sizes were around __20 KiB__ (it uses a compression proxy), and no JavaScript was loaded on the client. Simple JS tasks were delegated to the server side. I guess with some optimizations they could have saved on those full page reloads, but maybe that was computationally expensive for those phones.
There was a vibrant ecosystem for those phones. Till this day, those websites that host pirated music from Indian movies work with simplest of WWW browsers.
That's nostalgia. I remember a modded version of Opera Mini which had tons of other features and worked on phones with 4-8 MiB of memory (I don't know the exact specs, though; they were not listed for those phones).
Even today, with my paranoid no-JS, no-web-fonts browsing (uBlock Origin), there are parts of the web that are efficient. Hacker News or i.reddit.com, for example.
No bullshit, just focused on the text and reading.
Hoping that more authors take this route when releasing stuff for the web.
> tilde.club is not a social network it is one tiny totally standard unix computer that people respectfully use together in their shared quest to build awesome web pages
And the story behind it: https://medium.com/message/tilde-club-i-had-a-couple-drinks-...
I'm too burnt out on web "best practices" to care about that for my personal sites anymore.
If you're curious: https://benovermyer.com
Kinda makes me want to make a more detailed about me page. If I enjoy these kinds of details, surely some other people will too. Might do it this weekend.
Thanks for sharing!
> It isn’t a particularly sophisticated way to show emotions or manifest an attitude, but still so much more interesting and expressive than what is available now:
> First of all, because it is an expression of a dislike, when today there is only an opportunity to like.
> Second, the statement lays outside of any scale or dualism: the dislike is not the opposite of a like.
> Third: it is not a button or function, it works only in combination with another graphic or word. Such a graphic needed to be made or found and collected, then placed in the right context on the page—all done manually.
> I am mainly interested in early web amateurs because I strongly believe that the web in that state was the culmination of the Digital Revolution.
I have a non-tech-savvy family member who started a content website over 20 years ago by writing some basic HTML and uploading the files over FTP. It's a mental model that's easily understood by anyone who can operate Windows. He went on to maintain the site for the next 20 years and it worked fine.
In the last couple of years I did a big migration for his site and moved it to a markdown-based CMS (PicoCMS, in PHP), and he's been happy with it: he has a web editor (and learned markdown, which was easy) and no longer has to FTP.
The thing is, it took work to set all that up (on my end, that he doesn't see). I got a Digital Ocean server, installed a bunch of stuff around PHP, wrote some custom plugins for the CMS, etc.
After having done that 3-4 years ago, I realized the more modern, ideal, alternative is to have a Git repo of markdown files, and a Netlify setup (or another similar service to Netlify) where check-ins are automatically deployed.
The problem then is this: Git workflows are way, way too difficult for non-tech folks to understand. We're not even talking about command-line or desktop Git clients; even asking someone to use GitHub (or GitLab) to edit markdown files to update their site is not an easy mental model to wrap your head around (if you're not a coder).
I think the most ideal setup, this "better Wordpress" you mention, would be a web UI to edit markdown files, backed by a Git repo, hooked up to a Netlify-like service. I thought about working on that as a project, but it would rely on GitHub and Netlify as key pieces, and I'm not even sure Netlify allows a 3rd party to develop apps that end up creating Netlify sites on behalf of other customers. That would mean I'd have to build out the full Netlify deploy flow myself, and I'm really not in the business of doing that.
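The Git-backed piece of such a setup is thin, assuming GitHub's contents API (`PUT /repos/{owner}/{repo}/contents/{path}`), which accepts a JSON body with base64-encoded file content. The Python helper below only builds that body; the surrounding web UI, authentication, and the actual HTTP call are left out, and the function name is invented for illustration:

```python
import base64
import json


def github_commit_payload(markdown_text, message, branch="main", sha=None):
    """Build the JSON body a web editor could PUT to GitHub's contents API
    to commit an edited markdown file.

    GitHub requires the file content to be base64-encoded; `sha` identifies
    the existing blob when updating a file rather than creating one.
    """
    payload = {
        "message": message,
        "content": base64.b64encode(markdown_text.encode("utf-8")).decode("ascii"),
        "branch": branch,
    }
    if sha is not None:
        payload["sha"] = sha
    return json.dumps(payload)


body = github_commit_payload("# Hello\n\nEdited in the browser.", "Update post")
# A Netlify-style service watching the repo would then redeploy the site.
```

The point is that the non-tech user never sees any of this: from their side it's just a textarea and a "Save" button, and the Git history comes along for free.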
[1]: https://forestry.io
An opportunity, perhaps...
This is anecdotal, and I'm not making a "then vs. now" argument, but there was something about the exchange that I found reminiscent of this 'old web' and entirely absent from the way people communicate nowadays. I can't put my finger on it, but on what social platform would this exchange live now?
I've recently been tasked with building a website for a small organic food distributor in Oregon. I don't think the traditional image heavy "commercial web" fits well with their company culture and image but am struggling to find examples of "small web" commercial websites to show them as examples of a different way.
It would be nice to see another post discussing how we might bring the small web to the commercial one.
Is there a Web search engine that only indexes non-commercial content?