Due to the rise of influencers, social media is barely a sharing platform anymore; it's just decentralized, long-tailed broadcast media.
Many people today treat dining out and travel as hobbies, and doomscroll social media in between.
Time spent staring at the phone is rarely productive or anxiety-reducing.
Did you notice that this entire blog is just an LLM content farm newsletter? That the laptop in the headline image has a double keyboard AI artifact that the author didn't even spend 10 seconds cropping out?
The recent posts hit all the common marks of LLM-hallucinated content, like the famous "recursive protocol" trope. The posts are about BS like "UFO markets" and "reality protocols."
It's ironic that people are consuming this obvious AI slop uncritically while criticizing other people for their uncritical consumption of media on their phones.
If that’s not gatekeeping I don’t know what is.
If someone seeks out restaurants serving variations of a dish, or travels to the cemeteries where their ancestors are buried, that qualifies as a hobby.
Whereas if they just go to whatever restaurants, countries, or cities are trending on social networks, that's not a hobby.
In the first case, they make active decisions on what they do; in the second case, they are just following the decisions made by others.
To be clear, both are fine, as long as they're happy with how they spend their free time and money.
But I know who I'd rather spend time with, listening to them explain what they've been doing over the last few months.
Is shopping a hobby?
Hobbies to me are more about putting something out into the world even if just for yourself or family.
Cooking is more of a hobby than dining out.
Everything we have seen over the last few years (e.g. what Microsoft is doing to Windows) points to a push to make the platforms we used to control more like the 'consumption' platforms. Profit demands it.
"Does this serve my goals, or someone else's metrics?" indeed.
Within the last year I switched from Windows to Mac for my primary desktop, and compared to where Windows was headed, it feels like I regressed about a decade in the dumbification of computing.
I invested in two wireless handheld keyboard+pointer inputs to match the different input styles of me and my wife.
Completely bypasses all ads with less effort than setting up a pihole or torrent+Plex server, and the bonus is avoiding the surveillance from the TV's 'consumption OS'
Look at the heading and sub-heading of a post from a couple weeks ago:
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
It even gets into the "recursive protocol" trope that has become a common theme among people who think ChatGPT is revealing the secrets of the universe to them.
This type of LLM slop has been hitting the page more frequently lately. I assume it's from people upvoting the headline before reading the content.
"My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.
I use AI tools the way I used spell-check, grammar tools, and search engines before them—as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.
You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine."
As someone who actually clicks the links and reads the articles, I’m growing frustrated with these AI-written articles wasting my time. The content is typical of ChatGPT style idea expansion where someone puts their “ideas” into an LLM and then has the LLM generate filler content to expand it into a blog post.
I try to bring awareness of the AI generated content so others can avoid wasting their time on it as well. Content like this also gets flagged away from the front page as visitors realize what it is.
Your edited admission of using AI only confirms the accusations.
But the problem with them is the lack of accessible management. I want to be able to tell my grandfather: if you want to be mobile and powerful, buy this one specific box, pair your phone with the box, and install programs from here. Programs could then auto-detect the personal server and make use of it. For instance, a browser could store its history on the personal server and index it there. Currently any such feature is only possible via corporate-owned servers, and that is not going to change until a widely accepted private-server OS arrives.
As I imagine it, it should look like an app store on the phone, like NashStore or AppGallery, but with a personal server as a requirement, and with some background service to help the server and mobile parts of a program find each other.
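A minimal sketch of that "find each other" background service, assuming a simple UDP probe/reply handshake (the port, message strings, and advertised API port are all invented for illustration; a real system would likely use mDNS/Bonjour and broadcast rather than loopback):

```python
import socket
import threading

PROBE = b"WHO_HAS_MY_SERVER"
REPLY = b"PERSONAL_SERVER_AT:8443"   # server advertises its (hypothetical) API port

# Personal-server side: bind first so a probe can't race past us.
# A real deployment would use a fixed, well-known discovery port;
# here we let the OS pick one so the sketch runs anywhere.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
discovery_port = srv.getsockname()[1]

def respond_once():
    """Answer a single discovery probe with the service address."""
    data, addr = srv.recvfrom(1024)
    if data == PROBE:
        srv.sendto(REPLY, addr)

t = threading.Thread(target=respond_once, daemon=True)
t.start()

# Mobile-app side: probe (broadcast in real life) and record who answers.
def discover(timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(PROBE, ("127.0.0.1", discovery_port))
    data, addr = sock.recvfrom(1024)
    sock.close()
    return addr[0], int(data.split(b":")[1])

host, port = discover()
t.join()
srv.close()
print(host, port)  # the app now knows where its personal server lives
```

Once the app knows `host:port`, features like the server-side browser-history store become ordinary client/server calls against the personal box instead of a corporate one.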
IMO the important thing to be mindful of is your creation-vs-consumption balance. We tend to overindex on consumption.
https://vonnik.substack.com/p/how-to-take-your-brain-back
I think it's underrated, too, how much pairing a phone camera (to produce media) amplifies the possibility of consumption on the same device.
The impression being created here is that laptops are the best creation tools, and that users have a right to greater control over them. macOS, iOS, Windows, and Android are just extensions of each other. In a continually connected device ecosystem, there is a false perception of power and control in the writer's mind about using a laptop.
I certainly think that laptops have better software and interfaces for some types of work. But CapCut mobile is easier to use, and more powerful in the hands of the 99.9%, than any desktop editing tool.
What we must remember is that where the limitation on productivity was once typing or clicking, LLMs and AI-assisted tools are going to give mobile users the power that was once only available to computer users. For example, who needs to edit chunks of code when bitrig or cursor mobile (both early in their development as companies) do the laborious work for you? The limitation of mobile devices is now only one of perception.
But they really love multi-screens :) For me, multi-screens are a big waste; I find virtual screens far more useful. The only real use multi-screens have for me is debugging a program with some kind of user interface, and the second screen only needs to be a text terminal.
But I have not used Windows for decades, so I wonder if these multi-screen setups are popular due to how the Windows GUI works, and whether they are really needed.
I do admire people that can get it all done on a laptop.
Seriously, please stop it. If you talk about an abstract topic, feel free to have no picture, just text.
So much writing on the internet seems derivative nowadays (because it is, thanks to AI). I’d rather not read it though, it’s boring and feels like a samey waste of time. I do question how much of this is a feedback loop from people reading LLM text all the time, and subconsciously copying the structure and formatting in their own writing, but that’s probably way too optimistic.
Most of the examples used to justify creation vs consumption can also be explained by low scale vs high scale (cost sensitive at high scale) or portability.
Look at the titles of other posts:
> Memory Beaches and How Consciousness Hacks Time Through Frame Density
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
The blog is supposedly about AI agents and MCP (the current top buzzwords)
> Engineer-philosopher exploring the infrastructure of digital consciousness. Writing about Model Context Protocol (MCP), Information Beings, and how AI agents are rewiring human experience. Former Meta messaging architect.
The entire blog is just an LLM powered newsletter play.
As the author, do you just not see what a ridiculous image the slop machine spewed out? Or is it a kind of visual dyslexia where you don't register problematic hallucinations?
I can go on for a while hypothesizing, and none of the reasons I can come up with warrant using obviously bad AI slop images.
Is it disdain for your users - they won't see it/they won't care/they don't deserve something put together with care? Is it a lack of self-respect? Do people just genuinely not care and think that an article must have visuals to support it, no matter how crappy?
The mind truly boggles.
Snark aside, I think it's laziness and the shotgun approach. The author writes some rough thoughts down, has an AI "polish" them and generate an image, and posts an article. Shares it on HN. Do it enough, especially on a slow Sunday morning, and you'll get some engagement despite the detractors like us in the comments. Eventually you've got some readers.
Frankly the right tool is sometimes the one you have in front of you.
But anyone who's seen disaster DIY videos, or worse, had a house full of said projects from previous owners, knows there are problems caused by "when all you have is a hammer..." and an enthusiastic, inexperienced amateur.
Apple seems to be completely stuck with their macOS/iOS split, and probably will never do anything about it. iPadOS and macOS now look and feel more similar than ever before, but it's all just facade. They should commit really hard to merging these OSes, but they can't open up iOS, because that would threaten the 30% cut, so it's simply not going to happen.