divisiveness this kind of stuff will create
I'm pretty sure we're already decades into the world of "has created". Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us). It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well being) one way or the other.
It's just been too much for too long and you can tell.
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe I should set up a Pi-Hole business...
On the actual open decentralized internet, which still exists (Mastodon, IRC, Matrix...), bots are rare.
Which will eventually get worked around and can easily be masked by just having a backing track.
We truly live in wonderful times!
Answer? Probably "of course not"
They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
Then the comments are all usually not critical of the image but to portray the people supporting the [fake] image as being in a cult. It's wild!
As others have noted, it's a long-term trend, and I agree with you that it'll get worse. The Russian psy-op campaigns from the Internet Research Agency during the first Trump campaign are a notable entry; for example, they set up both fake far-left and far-right protest events on FB and used these as engagement bait on the right/left. (I'm sure the US is doing the same/worse to its adversaries too.)
Whatever fraction of traffic bots make up overall, it has to be way higher for political content, given the power dynamics.
And yes, I know the argument about YouTube being a platform that can be used for good and bad. But Google controls and creates the algorithm and what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "good and bad" angle.
I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.
Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.
While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.
Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.
"Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.
Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".
HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.
5 minutes after the first social network became famous. It never really was just about knowing people IRL; that was only in the beginning, until people started connecting with everyone and their mother.
Now it's about people connecting and socializing. If there are persons, then it's social. HN has profiles where you can "follow" people; thus, it's social on a minimal level. Though we could dispute whether it's just media or a mature network, because there obviously are notable differences in social-related features between HN and Facebook.
"Social media" has become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do with 'being social'. There is NO difference between modern Reddit and Facebook in that sense. (Less than 5% of users are on old.reddit; the majority is subject to the algorithm.)
[1] Those "crappy websites" with a maze of iframes are actually considered surprisingly refreshing today.
everybody is either trying to promote some product, or promote themselves as a "content creator" so they can start getting influencer marketing deals and payouts from youtube or instagram ad splits.
the internet was at its best when most of the content on it existed just because somebody had some information and wanted to share it. that intent is still out there today, but unfortunately it's harder and harder to find, buried amongst all the revenue generation.
Also, on the phrase "you're absolutely right": it's definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but, nonetheless, we use it. We also tend to use "Well, you're not wrong", again in a sarcastic manner, for something which is obvious.
And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.
Just thought I'd add that in there, as it's a bit extreme to see an em dash and instantly jump to "must be written by AI".
If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.
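For what it's worth, on Linux/X11 those sequences correspond to entries like the following in a user's `~/.XCompose` file (a sketch; the stock en_US.UTF-8 Compose table already ships equivalents, so these are only needed if you want to customize):

```
<Multi_key> <minus> <minus> <minus>  : "—"  U2014  # em dash
<Multi_key> <minus> <minus> <period> : "–"  U2013  # en dash
```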
I never saw em-dashes—the longer version with no space—outside of published books and now AI.
As far as I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using em-dashes altogether.
I guess what I'm trying to say is that unless you write professionally, most people are inconsistent. Sometimes I use em-dashes. Sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I'm typing from a phone (which does a lot of heavy lifting for me).
If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.
It's still frequently identifiable in (current-generation) LLM text by the glossy superficiality that comes along with these usages. For example, in "It's not just X, it's Y", when a human does this it will be because Y materially adds something that's not captured by X, but in LLM output X and Y tend to be very close in meaning, maybe different in intensity, such that saying them both really adds nothing. Or when I use "You're absolutely right" I'll clarify what they are right about, whereas for the LLM it's just an empty affirmation.
For the past 15 years, I’ve used the Unicycle Vim plugin¹ which makes it very easy to add proper typographic quotes and dashes in Insert mode. As something of a typography nerd, I’ve extended it to include other Unicode characters, e.g., prime and double-prime characters to represent minutes and seconds.
At the same time, I’ve always used a Firefox extension that launches GVim when editing a text box; currently, I’m using Tridactyl for this purpose.
LLMs use em-dashes because people (in their training data) used em-dashes. They use "You're absolutely right" because that's a common human phrase. It's not "You write like an LLM", it's "The LLMs write kind of like you", and for good reason: that's exactly what people have been training them to do.
And yes, "pun" intended for extra effect, that also comes from humans doing it.
Likewise. I used to copy/paste them when I couldn't figure out how to actually type them, lol. Or use the HTML char code `&mdash;`. It sucks that good grammar now makes people assume you used AI.
You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386
It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.
hyphens are so hideous that I can't stand them.
YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.
LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.
Even HN has the incentive of promoting people's startups.
Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?
It's only recently, when I was considering reviving old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and the storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level accusation and name-calling contest.
I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.
Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or used to; I used to go riding with a couple of them), and some social topics, while tending to avoid politics (too divisive) and religion (I think none of us is religious enough to debate it). We were initially in the same country and some were meeting IRL from time to time, but now we are spread across Europe (one in the US), so the forum is what keeps us in contact. Even the ones in the same country, probably a minority these days, are spread too thin, but the forum is there.
If human interaction requires IRL, I have met fewer than 10 forum members, and met frequently with just 3 (2 on motorcycle trips, one worked for a few years in the same place as I did), but that is not a metric that means much. It is the false sense of being close over the internet while being geographically far that works in a way, but not really. For example, my best friends all emigrated, most of them childhood friends; communicating with them on the phone or the Internet means I never feel lonely, but only seeing them every few years grows the distance between us. That impacts human-to-human interaction; there is no way around it.
spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies. In the end YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.
None of these big tech platforms built on UGC were ever meant to scale. They are beyond accountability.
Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.
Any community that ends up creating utility for its users will attract automation, as someone tries to extract, or even destroy, that utility.
A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on r/ChangeMyView or r/AITA. There are also tests being run to see if LLMs can identify flamewars or provide bridges across them.
Elon Musk cops a lot of blame for the degradation of Twitter from people who care about that sort of thing, and he definitely plays a part there, but it's the monetisation aspect that was the real tilt toward all noise, from a signal-to-noise perspective.
We've taken a version of a problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.
1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.
2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.
the idea being that you'd somewhat ensure the person is a human that _may well_ know what they're talking about e.g. `abluecloud from @meta.com`.
Kill the influencer, kill the creator. It's all bullshit.
Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.
- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]
- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]
[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
[1] https://verify.contentauthenticity.org/
I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.
That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.
AI content outnumbers real content. We are not going to decide whether every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia, or sent to court, without people doubting it.
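Since C2PA manifests ride along inside the image file itself (in JPEGs they are typically carried in APP11/JUMBF marker segments), you can at least check for their *presence* without special tooling. A minimal sketch in Python; this only detects candidate segments, and real verification of the signed manifest still needs a proper C2PA verifier like the site linked above. The synthetic "JPEG" at the bottom is purely illustrative.

```python
import struct

def find_app11_segments(jpeg_bytes: bytes):
    """Scan a JPEG byte stream for APP11 (0xFFEB) marker segments,
    where C2PA/JUMBF provenance manifests are typically stored."""
    segments = []
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker
        raise ValueError("not a JPEG")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI
            break
        # Segment length field includes its own 2 bytes, not the marker.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xEB:  # APP11
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments

# Synthetic example: SOI + one APP11 segment + EOI
payload = b"JP"  # stand-in for a JUMBF box header
app11 = b"\xff\xeb" + struct.pack(">H", len(payload) + 2) + payload
fake_jpeg = b"\xff\xd8" + app11 + b"\xff\xd9"
print(len(find_app11_segments(fake_jpeg)))  # -> 1
```

An empty result doesn't prove an image is AI-free, of course; it only means no provenance metadata survived (or was ever added).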
Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.
People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.
This is like those projects where guys from high school took Ubuntu, changed the logo in a couple of places, and then claimed they'd made a new OS.
Anybody minimally competent can see the childish exaggeration in both cases.
The most logical request is to grow up, be transparent about what you did, and stop lying.
It honestly felt like being gaslighted. You see one thing, but they keep claiming you are wrong.
Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) | https://news.ycombinator.com/item?id=46661308
Commenter: > What % of the code is written by you and what % is written by ai
OP: > Good question!
>
> All the code, architecture, logic, and design in minikv were written by me, 100% by hand. I did use AI tools only for a small part of the documentation—specifically the README, LEARNING.md, and RAM_COMMUNITY.md files—to help structure the content and improve clarity.
>
> But for all the source code (Rust), tests, and implementation, I wrote everything myself, reviewing and designing every part.
> Let me know if you want details or want to look at a specific part of the code!
Oof. That is pretty damning.
———
It’s unfortunate that em-dashes have become a shibboleth for AI-generated text. I love em-dashes, and iPhones automatically turn a double dash ( -- ) into an em dash.
I saw, 17 years ago, a schoolboy make "his own OS", which was simply Ubuntu with replaced logos. He got on TV with it; IIRC he was promoting it on the internets (forums back then), and kept insisting that it was his own work. He was bullied in response and within a few weeks disappeared from the nets.
What does it have to do with me personally, if I'm not the author, nor a bully? Today I learned that I can't trust new libraries posted in official repos. They can be just wrapper-code slop. In 2012, Jack Diederich, in his talk "Stop Writing Classes", said that he'd read every library's source code to find out if there was anything stinky. I used to think that reading into everything you use was a luxury of his time and qualifications. Now it has become a necessity, at least for new projects.
On the other hand, the fact that we can't tell doesn't speak so well of the AIs as it speaks badly of most of our (at least online) interaction. How much of my (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are comprised of, whether produced directly by humans, by algorithms, or otherwise.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, there was a bigger percentage, but today, in the big numbers, the usage is not so different from what LLMs do.
1. Text is a very compressed / low information method of communication.
2. Text inherently has some “authority” and “validity”, because:
3. We’ve grown up to internalize that text is written by a human. Someone spend the effort to think and write down their thoughts, and probably put some effort into making sure what they said is not obviously incorrect.
This ties intimately into why LLMs working on text have an easier time tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and move would. We give text the benefit of the doubt.
I’ve already had some odd phone calls recently where I have a really hard time distinguishing if I’m talking to a robot or a human…
To pass the Turing test the AI would have to be indistinguishable from a human to the person interrogating it in a back and forth conversation. Simply being fooled by some generated content does not count (if it did, this was passed decades ago).
No LLM/AI system today can pass the Turing test.
I can't do Reddit anymore; it does my head in. Lemmy has been far more pleasant, as there is still good posting etiquette.
For licensed professions, they have registries where you can look people up and confirm their status. The bot might need to carry out a somewhat involved fraud if they're checking.
Social media has become the internet, and/or vice versa.
Also, I think you're objectively wrong in this statement:
"the actual function of this website is, which is to promote the views of a small in crowd"
Which I don't think was the actual function of (original) social media either.
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
It lets users type all sorts of ‡s, (*´ڡ`●)s, 2026/01/19s, by name, on Windows, Mac, Linux, through pc101, standard Dvorak, or your custom QMK config, anywhere, without much prior knowledge. All it takes is a little proto-AI, ranging from floppy-disk size to at most a few hundred MBs, rewriting your input somewhere between the physical keyboard and the text input API.
If I want em-dashes, I can do just that instantly. I'm on Windows and I don't know what the key combinations are. Doesn't matter. I say "emdash" and here be an em-dash. There should be the equivalent of this thing for everybody.
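At its core, the "say a name, get the character" trick is just a rewrite layer between raw input and the text field. A toy sketch of such a mapping; the table and trigger words here are illustrative, not any real tool's API:

```python
# Minimal sketch of an input-rewriting layer: spoken or typed token
# names are expanded into the Unicode characters they denote.
EXPANSIONS = {
    "emdash": "\u2014",    # em dash
    "endash": "\u2013",    # en dash
    "ellipsis": "\u2026",  # horizontal ellipsis
}

def expand(token: str) -> str:
    """Replace a known token name with its character; pass others through."""
    return EXPANSIONS.get(token, token)

print(expand("emdash") == "\u2014")  # -> True
print(expand("hello"))               # -> hello (unknown tokens untouched)
```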
In X amount of time, a significant majority of road traffic will be bots in the driver's seat (figuratively), and a majority of said traffic won't even have a human on board. It will be deliveries of goods and food.
I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).
Nah. That assumes most cars today, with literal, not figurative, humans, are delivering goods and food. But they're not: most cars during traffic hours, by very very very far, are just delivering groceries-less people from point A to point B. In the morning: delivering a human (usually by said human) to work. Delivering a human to school. Delivering a human back home. Delivering a human back from school.
What should we conclude from those two extraneous dashes....
They were sales people, and part of the pitch was getting the buyer to come to a particular idea "all on their own" then make them feel good on how smart they were.
The other funny thing about em dashes is that there are a number of HNers who use them, and I've seen them called bots. But when you dig deep into their posts, they've had em dashes 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.
This said, when the number of people who talk like that becomes too high, the statistical likelihood that they are all human drops considerably.
There's a new one: "wired". "I have wired this into X", or "this wires into Y". Cortex does this and I have noticed it more and more recently.
It super sticks out, because who the hell ever said that X part of the program wires into Y?
It may grate, but to me it grates less than "correct", which is a major sign of an arrogant "I decide what is right or wrong"; when I hear it outside of a context where somebody is the arbiter or a teacher, I switch off.
But you're absolutely wrong about "you're absolutely right".
It's a bit hokey, but it's not a machine made signifier.
https://news.ycombinator.com/item?id=46674621 and https://news.ycombinator.com/item?id=46673930 are the top comments and that's about as good as HN gets.
To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.
Of course, if (big if) it does end up being large enough, the value of getting an invite will get to a point where a member can sell access.
Non-native speaker here: huh, is "you are absolutely right" wrong somehow? I.e., are you a bad English speaker for using it? Fully agree (I guess "fully agree" is the common one?) with this criticism of the article; to me that colloquialism does not sound fishy at all.
There might also be two effects at play:
1. Speech "bubbles" where your preferred language is heavily influenced by where you grew up. What sounds common to you might sound uncommon in Canada.
2. People have been using LLMs for years at this point, so what is common for them might be influenced by what they read in LLM output. So even if it wasn't originally an LLM colloquialism, it could have been popularized by LLM usage.

1) to satisfy investors, companies require continual growth in engagement and users
2) the population isn't rocketing upwards on a year-over-year basis
3) the % of the population that is online has saturated
4) there are only so many hours in the day
Inevitably, in order to maintain growth in engagement (comments, posts, likes, etc.), it will have to become automated. Are we there already? Maybe. Regardless, any system which requires continual growth has to automate, and the investor expectations for the internet economy require it, and therefore it has or soon will automate.
Not saying it's not bad, just that it's not surprising.
I feel things are just as likely to get to the point where real people are commonly declared AI, as they are to actually encounter the dead internet.
Innovation outside of rich corporations will end. No one will visit forums, innovation will die in a vacuum, only the richest will have access to what the internet was, raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.
I think old school meetups, user groups, etc, will come back again, and then, more private communication channels between these groups (due to geographic distance).
I'm thinking stuff like web rings.
Or if you have a blog, maybe also have a curated set of pages you think are good, sort of your bookmarks, that other people can have a look at.
People are still on the internet and making cool stuff, it's just harder to find them nowadays.
Something similar happened in the Podcast and YouTube spheres, where every creator seems to be "sponsored" by these shady companies that allocate 70% of their revenue for creator payouts, for the sake of affiliate marketing.
I really don't know what the solution is, though.
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.
At this point I think Reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
Adding the option to hide profile comments/posts was also a terrible move for several reasons.
Isn't that just fraud?
When are people who buy ads going to realize that the majority of their online ad spend goes toward bots rather than human eyeballs that will actually buy their product? I'm very surprised there hasn't been a massive lawsuit against Google, Facebook, Reddit, etc. for misleading and essentially scamming ad buyers.
At the moment I am on a personal finance kick. Once in a while I find myself in the Bogleheads subreddit. If you don't know, Bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.
Most of it is people arguing about VOO vs VTI vs VT. (lol) But people come in with their crazy scenarios, which are all too varied to be from a bot, although the answers could easily be given by one!
If no human ever used that phrase, I wonder where the AIs learned it from? Have they invented new mannerisms? That would seem to imply they're far more capable than I thought they were.
Reinforced with RLHF? People like it when they're told they're right.
I'm sure it's happening, but I don't know how much.
Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.
And some people are probably running bots on HN just for amusement, with no application in mind.
And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.
I've gotten a few replies that made me wonder whether it was an LLM.
Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.
(P.S., The more you upvote me, the sooner you get to stop hearing from me.)
I call this the "carpet effect", where all carpets in Morocco have an imperfection, lest they impersonate God.
But really I'm not a professional in this field. I'm sure there are pitfalls in my imagined solution. I just want some traceability from the images used in news articles.
This was on HN this year, and it was, in classic HN fashion, dismissed as a solution in search of a problem. Well, perhaps people in this thread will think differently.
But nope. Instead we have meme coins and speculators...
About 10 years ago we had a scenario where bots probably were only 2-5% of the conversation and they absolutely dominated all discussion. Having a tiny coordinated minority in a vast sea of uncoordinated people is 100x more manipulative than having a dead internet. If you ever pointed out that we were being botted, everyone would ignore you or pretend you were crazy. It didn’t even matter that the Head of the FBI came out and said we were being manipulated by bots. Everyone laughed at him the same way.
This was definitively not the case on HackerNews.
Maybe it is a UK thing?
https://en.wikipedia.org/wiki/The_Unbelievable_Truth_(radio_...
I love that BBC Radio (today: BBC Audio) series. It started before the inflation of 'alternative facts', and it is well worth following (and very funny and entertaining) to see how the show has developed over the past 19 years.
I cannot find a place to talk to like-minded people anymore; it's all gamified to sell you something. All folks do on Reddit is talk at you. I'm starting to doubt half the people posting are actual people now... with so many comments that are perfectly formatted and phrased like ChatGPT.
Unfortunately, less privileged users will have to endure the sea of AI content that still preys on the unauthenticated. It will be like using the web without an ad blocker, but 1000X worse.
But it was a long death struggle, bleeding out drop by drop. Who remembers that people had to learn netiquette before getting into conversations? That is called civilisation.
The author of this post experienced the last remains of that culture in the 00s.
I don't blame the horde of uneducated home users who came after the Eternal September. They were not stupid. We could have built a new culture together with them.
I blame the power of the profit. Big companies rolled in like bulldozers. Mindless machines, fueled by billions of dollars, rolling in the direction of the next ad revenue.
Relationships, civilization and culture are fragile. We must take good care of them. We should, but the bulldozers destroyed every structure they lived in on the Internet.
I don't want to whine. There is a lesson here: money, and especially advertising, is poison for social and cultural spaces. When we build the next space where culture can grow, let's make sure to keep the poison out by design.
"We have been waiting 20 minutes!"
"You're absolutely right, and I apologize. We will try to schedule better in the future."
I have never said it to someone with whom I was having a regular discussion.
OTOH, I used to overuse em dashes because the Mac made proper typesetting possible. It used to be the sign that someone had read the very useful The Mac is not a Typewriter by Robin Williams.
I wonder if this does apply to the same magnitude in the real world. It's very easy to see this phenomenon on the internet because it's so vast and interconnected. Attention is very limited and there is so much stuff out there that the average user can only offer minimal attention and effort (the usual 80-20 Pareto allocation). In the real world things are more granular, hyperlocal and less homogeneous.
1. People who live in poorer countries who simply know how to rage bait and are trying to earn an income. In many such countries $200 in ad revenue from Twitter, for example, is significant; and
2. Organized bot farms who are pushing a given message or scam. These too tend to be operated out of poorer countries because it's cheaper.
Last month, Twitter kind of exposed this accidentally with an interesting feature that showed account location with no warning whatsoever. Interestingly, showing the country in the profile got disabled for government accounts after it raised some serious questions [1].
So I started thinking about the technical feasibility of showing location (country, or state for large countries) on all public social media accounts. The obvious defense is to use a VPN in the country you want to appear to be from, but I think that's a solvable problem.
Another thing I read was about NVidia's efforts to combat "smuggling" of GPUs to China with location verification [2]. The idea is fairly simple. You send a challenge and measure the latency. VPNs can't hide latency.
So every now and again the Twitter or IG or TikTok server would answer an API request with a challenge, which couldn't be anticipated and would also be secure, being part of the HTTPS traffic. The client would respond to the challenge, and if the latency was consistently 100-150ms despite showing a location of Virginia, then you can deem them inauthentic and basically just downrank all their content.
There's more to it of course. A lot is in the details. Like you'd have to handle verified accounts and people traveling and high-latency networks (eg Starlink).
You might say "well the phone farms will move to the US". That might be true but it makes it more expensive and easier to police.
It feels like a solvable problem.
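The challenge-latency idea above could be sketched roughly as follows. This is a toy illustration, not any platform's real API: the function names, the fiber-speed constant, and the slack value are all assumptions. The key trick is using the *minimum* of many round-trip samples, since congestion can inflate individual measurements but nothing can beat the physical propagation bound.

```python
# Illustrative constant: a signal in fiber covers roughly 200 km per
# millisecond one way. This is an assumption for the sketch, not a spec.
KM_PER_MS_ONE_WAY = 200

def min_expected_rtt_ms(distance_km):
    """Physical lower bound on round-trip time for a given distance."""
    return 2 * distance_km / KM_PER_MS_ONE_WAY

def likely_vpn(rtt_samples_ms, claimed_distance_km, slack_ms=40):
    """Flag a client whose best-case latency is inconsistent with its
    claimed location (hypothetical check, not a real platform feature).

    Queueing delay can inflate any single sample, but no sample can beat
    the speed-of-light bound, so a consistently high *floor* across many
    unpredictable challenges is the suspicious signal.
    """
    best = min(rtt_samples_ms)
    return best > min_expected_rtt_ms(claimed_distance_km) + slack_ms

# An account claiming to be ~50 km from the server, whose traffic never
# dips below ~128 ms, gets flagged; a genuinely nearby client does not.
# likely_vpn([128, 131, 142], claimed_distance_km=50) -> True
# likely_vpn([12, 15, 30], claimed_distance_km=50)    -> False
```

The slack term is where the hard part hides: as noted above, travelers, Starlink, and other high-latency-but-honest networks all have to be absorbed by it, so in practice you would downrank on a sustained pattern rather than a single failed check.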
[1]: https://www.nbcnews.com/news/us-news/x-new-location-transpar...
[2]: https://aihola.com/article/nvidia-gpu-location-verification-...
I do and so do a number of others, and I like Oxford commas too.
> What if people DO USE em-dashes in real life?
They do and have, for a long time. I know someone who for many years (much longer than LLMs have been available) has complained about their overuse.
> hence, you often see -- in HackerNews comments, where the author is probably used to Markdown renderer
Using two dashes for an em-dash goes back to typewriter keyboards, which had only what we now call printable ASCII and where it was much harder to add non-ASCII characters than it is on your computer - no special key combos. (Which also means that em-dashes existed in the typewriter era.)
If AI cost you what it actually costs, you would use it more carefully and for better purposes.
There may be some irony to be found in this human centipede.
Those sound funny; why would they make you sad?
There are many compounding factors but I experienced the live internet and what we have today is dead.
So I go on my province's subreddit. Politics-wise, if there were an election today, the incumbent politician would increase their majority and may even be looking at a true majority. Hugely popular.
If you find a political thread, there will be 500 comments all agreeing with each other that the incumbent is evil, 50 comments downvoted and censored because they dare have an opinion that agrees with the incumbent, and 100 comments deleted by anonymous mods banning people for who knows what reason. Enjoy your echo chamber.
Anyone who experiences being censored a few times will just stop posting. Then when the election happens they have no idea at all why people would ever vote that way because they have never seen anyone do anything but agree with their opinion.
What an utterly dead subreddit.
This is the modern epistemic crisis. And wait till Elon implants a brain computer interface in you. You won't even fully trust your eye looking through a telescope.
The Internet has never been dead. Or alive. Ever since it escaped its comfortable cage in the university / military / small-clique-of-corporations ecosystem and became a thing "anyone" can see and publish on, there has forever been a push-pull between "People wanting to use this to solve their problems" and "People wanting eyeballs on their content, no matter the reason." We're just in an interesting local minimum where the ability to auto-generate human-shaped content has momentarily overtaken the tools search engines (and people with their own brains) use to filter useful from useless, and nobody has yet come up with the PageRank-equivalent nuclear weapon to swing the equation back again.
I'm giving it time, and until it happens I'm using a smaller list of curated sites I mostly trust to get me answers or engage with people I know IRL (as well as Mastodon, which mostly escapes these effects by being bad at transiting novelty from server to server), because thanks to the domain name ownership model pedigree of site ownership still mostly matters.
How sick and tired I am of this take. Okay, people are just bags of bones plus slightly electrified boxes with fat and liquid.
Paying creators is the dumbest and most consequential aspect of modern media. There is no reason to reward creators, zero. They should actually be paying Youtube for access to their audience. They actually would pay to be seen, paying them is both stupid and unnecessary. Kill the incentives and you kill the cancer.
Yeah, I especially hate how paranoid everyone is (but rightly so). I am constantly suspicious of others' perfectly original work being AI, and others are constantly suspicious of my work being AI.
If you define social networks as a graph of connections, fair enough - there's no graph. It is social media though.
HN is social in the sense that it relies on (mostly) humans considering what other humans would find interesting and posting/commenting for the social value (and karma) that generates. Text and links are obviously media.
There seems to be an insinuation that HN isn't in the same category as other aggregators and algorithmic feeds. It's not always easy to detect, but the bots are definitely among us. HN isn't immune to slop; it's just fairly good at filtering the obvious stuff.
I mean sure, the next step will probably be "your ads have been seen by x real users and here are their names, emails, and mobile numbers" :(
As well as verification, there must be teams at Reddit/LinkedIn/wherever working on ways to identify AI content so it can be de-ranked.
I am sick of the em-dash slander as a prolific en- and em-dash user :(
Sure for the general population most people probably don't know, but this article is specifically about Hacker News and I would trust most of you all to be able to remember one of:
- Compose, hyphen, hyphen, hyphen
- Option + Shift + hyphen
(Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)
> The Oxford Word of the Year 2025 is rage bait
> Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content”.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
I don't mind people using AI to create open source projects; I use it extensively, but I have a rule that I am responsible and accountable for the code.
Social media have become hellscapes of AI slop, with "influencers" trying to make quick money by overhyping slop to sell courses.
Maybe where you are from the em dash is not used, but in Queen's English speaking countries the em dash is quite common to represent a break of thought from the main idea of a sentence.
I don’t think LLMs and video/image models are a negative at all. And it’s shocking to me that more people don’t share this viewpoint.
Maybe the future will be dystopian and talking to a bot to achieve a given task will be a skill? When we reach the point that people actually hate bots, maybe that will be a turning point?
1. There are channels specialized in topics like police bodycam and dashcam videos, or courtroom videos. AI there is used to generate the voice (and sometimes a very obviously fake talking head) and maybe the script itself. It seems to be a way to automate tasks.
2. Some channels are generating infuriating videos about fake motorbike releases. Many of them.
Think of the children!!!
dude, hate to break it to you but the fact that it's your "one and only" makes it more convincing it's your social network. if you used facebook, instagram, and tiktok for socializing, but HN for information, you would have another leg to stand on.
yes, HN is "the land of misfit toys", but if you come here regularly and participate in discussions with other people on a variety of topics and you care about the interactions, that's socializing. The only reason you think it's not is that you find actual social interaction awkward, so you assume that if you like this it must not be social.
Just absolutely loved it. Everyone was wondering how deepfakes are going to fool people but on HN you just have to lie somewhere on the Internet and the great minds of this site will believe it.
It used to be the Internet, back when the name was still written with a capital first letter. The barrier to using the Internet was high enough that mostly only the genuinely curious and thoughtful people a) got past it and b) had the persistence to find interesting stuff to read and write about on it.
I remember when TV and magazines were full of slop of the day at the time. Human-generated, empty, meaningless, "entertainment" slop. The internet was a thousand times more interesting. I thought why would anyone watch a crappy movie or show on TV or cable, created by mediocre people for mere commercial purposes, when you could connect to a lone soul on the other side of the globe and have intelligent conversations with this person, or people, or read pages/articles/news they had published and participate in this digital society. It was ethereal and wonderful, something unlike anything else before.
Then the masses got online. Gradually, the interesting stuff got washed into the cracks of the commercial internet, still existing but mostly overshadowed by everything else. Commercial agendas, advertisements, entertainment, company PR campaigns disguised as articles: all the slop you could get without even touching AI. With subcultures moving from Usenet to web forums, or from writing web articles to posting on Facebook, the barrier got lowered until there was no barrier and all the good stuff got mixed with the demands and supplies of everything average. Earlier, there were always a handful of people in the digital avenues of communication who didn't belong, but they could be managed; nowadays the digital avenues of communication are open to everyone, and consequently you get every kind of people in, without any barriers.
And where there are masses there are huge incentives to profit from them. This is why the internet is no longer an infrastructure for the information superhighway but for distributing entertainment and profiting from it. First, transferring data got automated and became dirt cheap; now creating content is being automated and becomes dirt cheap. The new slop oozes out of AI. The common denominator of the internet is so low that smart people get lost in all the easily accessed action. Further, smart people themselves are now succumbing to it, because to shield yourself from all the crap that is the commercial slop internet you basically have to revert to being a semi-offline hermit, and that goes against all the curiosity and stimuli deeply associated with smart people.
What could be the next differentiator? It used to be knowledge and skill: you had to be a smart person to know enough and learn enough to get access. But now all that gets automated so fast that it proves to be no barrier.
Attention span might be a good metric to filter people into a new service, realm, or society even though, admittedly, it is shortening for everyone; smart people would still win.
Earlier solutions such as Usenet and IRC haven't died, but they're only used by the old-timers. It's a shame, because a gathering there would miss all the smart people raised in the current social media culture: the world changes, and what worked in the 90's is no longer relevant except for people who were there in the 90's.
Reverting to in-real-life societies could work but doesn't scale world-wide and the world is global now. Maybe some kind of "nerdbook": an open, p2p, non-commercial, not centrally controlled, feedless facebook clone could implement a digital club of smart people.
The best part of setting up a service for smart people is that it does not need to prioritize scaling.
Reminds me of those times in Germany when mainstream media and people with decades in academia used the term "Putin Versteher" (a person who gets Putin, a Putin 'understander') ad nauseam ... it was hilarious.
Unrelated to that, sometime last year, I searched "in" ChatGPT for occult stuff in the middle of a sleepless night and it returned a story about "The Discordians", some dudes who ganged up in a bowling hall in the 70's and took over media and politics, starting in the US and growing globally.
Musk's "Daddy N** Heil Hitler" greeting, JD's and A. Heart's public court hearings, the Kushners being heavily involved with the recruitment department of the Epsteins Islands and their "little Euphoria" clubs as well as Epstein's "Gugu Gaga Cupid" list of friends and friends of friends, it's all somewhat connected to "The Discordians", apparently.
It was a fun "hallucination" in between short bits on Voodoo, Lovecraft and stuff one rarely hears about at all.
Recently someone on Hacker News accused me of being a clanker (firstly lmao, but secondly wow) because of my username (not sure how that's relevant; when I created this account I felt a moral obligation to ask for help and improve, and improve I did, whether in writing skills or in learning about tech).
Then I posted another comment on another thread here which was talking about something similar. The earlier comment and my response to it got flagged, but this one stayed. Then someone else saw that comment and accused me of being AI again.
This pissed me off because I got called AI twice in 24 hours. It made me want to quit Hacker News, because you can see from my comments that I write long ones (partially because they act as my mini blog, and I just like being authentically me; this is me just writing my thoughts with a keyboard :)
To say that what I write is AI feels like such high disrespect, because I have spent hours thinking about some of the comments I made here, and I don't really care about the upvotes. It's just that this place is mine and these thoughts are mine. You can know me and verify I am real just by reading through the comments.
And then getting called AI... oof. Anyway, I created a "Tell HN: I got called a clanker twice" thread where I wrote the previous thing, which got flagged, and I am literally not kidding: the first comment arrived about 2 minutes later from an AI bot itself (a completely new account) and literally just said "cool".
Going to their profile, they were promoting some AI thing like fkimage or something (intentionally not saying the real website because I don't want those bots to get any rage-bait attention or conversions on their websites).
So you can see the whole irony of the situation.
I immediately built myself a Bluesky thread creator where I can write a long message and it will automatically loop it into a thread or something (ironically built via Claude, because I don't know how browser extensions are made), just so that I can now write things on Bluesky too.
Funny thing is, I used to defend Hacker News and glorify it a few days ago, when a much more experienced guy compared HN to 4chan.
I am a teenager. I don't know why I like saying this, but the point is, most teenagers aren't like me (that I know of). It has both its ups and downs (I should be studying chemistry right now), but Hacker News culture was something that inspired me to become the guy who feels confident tinkering and making computers "do" what he wants (mostly for personal use/prototyping, so I do use some parts of AI; you can read one of my other comments on why I believe that even as an AI hater, prototyping and personal use might make sense with AI; my opinion's nuanced).
I came to Hacker News because I wanted to escape dead internet theory in the first place. I saw people doing some crazy things here; reading comments this long while commuting from school was a vibe.
I am probably gonna migrate to Lemmy/Bluesky/the federated land. My issue with them is the ratio of political messages to tech content (and I love me some geopolitics, but quite frankly I am tired and I just want to relax).
But the lure of Hackernews is way too much, which is why you still see me here :)
I don't really know what the community can do about bots.
Another part is that there is this model on LocalLLaMA which I discovered the other day that works in the opposite direction (it can convert LLM-looking text to human-sounding text, actually bypasses some bot checks, and also removes the em dashes, I think).
Grok (I hate Grok) produces some insanely real-looking text. It still has em dashes, but I do feel like if one removes them and modifies the output just a bit (whether using a local model or otherwise), you've got yourself a genuine propaganda machine.
I was part of a Discord AI server, and I was shocked to hear that people had built and were running their own LLMs/finetunes; they actually experimented on 2-3 people, and none were able to detect them.
I genuinely don't know how to prevent bots here, or how to prevent false positives.
I lost my mind 3 days ago when this happened. I had to calm myself, and I am trying to use Hacker News less frequently. I just don't know what to say, but I hope y'all realize how it put a bad taste in my mouth and why I feel a little unengaged now.
Honestly, I am feeling like turning my previous Hacker News comments into blog posts on my own website. They might deserve a better place too.
Oops, wrote way too long of a message, sorry about that, man, but I just went with the flow. Thanks for writing this comment so that I could finally have one place to try to explain how I was feeling.
The 'Dead Internet' (specifically AI-generated SEO slop) has effectively broken traditional keyword search (BM25/TF-IDF). Bad actors can now generate thousands of product descriptions that mathematically match a user's query perfectly but are semantically garbage/fake.
We had to pivot our entire discovery stack to Semantic Search (Vector Embeddings) sooner than planned. Not just for better recommendations, but as an adversarial filter.
When you match based on intent vectors rather than token overlap, the 'synthetic noise' gets filtered out naturally because the machine understands the context, not just the string match. Semantic search is becoming the only firewall against the dead internet.
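A minimal sketch of the idea: rank by similarity in embedding space rather than token overlap. Everything here is illustrative - the function names are made up, and the toy three-dimensional vectors stand in for what a real sentence-encoder model would produce. The point is only the mechanism: keyword-stuffed spam can repeat the query's tokens, but its embedding tends to sit far from the query's intent vector.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_results(query_vec, docs, threshold=0.6):
    """Keep only documents whose embedding is close to the query's
    intent vector (hypothetical filter, threshold chosen arbitrarily).

    `docs` is a list of (doc_id, embedding) pairs; in production the
    embeddings would come from a real encoder model.
    """
    return [doc_id for doc_id, vec in docs
            if cosine(query_vec, vec) >= threshold]

# Toy example: the spam doc would win on token overlap, but its
# embedding points in a different direction than the query's.
query = [0.9, 0.1, 0.0]
docs = [
    ("genuine_product", [0.8, 0.2, 0.1]),       # semantically close
    ("keyword_stuffed_spam", [0.1, 0.1, 0.9]),  # wrong intent
]
# filter_results(query, docs) -> ["genuine_product"]
```

In a real stack the threshold and the encoder choice carry most of the weight, and adversaries will eventually optimize against the embedding model too, so this is a higher fence rather than a permanent firewall.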