Ironically, LLMs might end up forcing us back toward more distinct voices because sameness has become the default background.
People believe this and continue to get fooled by LLMs all day.
That's always been the somethingawful crowd's stance since, what, 2000ish?
But I find this take interesting: the brewing of a new kind of counterculture that forces humans to express themselves creatively. Hopefully it doesn't get too radical.
I agree.
LLMs are like blackface for dumbfucks: LLMs let the profoundly retarded put on the makeup and airs of the literati so they can parade around self-identifying as if they have a clue.
If you don't like the barbs in this kind of writing, prepare for more anodyne corporate slop. Every downvote signals to the algorithm that you prefer mediocrity.
I’m not mourning it.
People were posting Medium pieces that rewrote someone else's content, often wrongly, and so on.
The sculpting force of algorithms is bite-sized zingers, hot takes, ragebait, and playing to the analytics.
A lot of us spent years optimizing for clarity, SEO, professionalism etc. But that did shape how we wrote, maybe even more than our natural cadence. The result wasn’t voice, it was everyone converging on the safe and optimized template.
As soon as I know something is written by AI I tune out. I don't care how good it is - I'm not interested if a person didn't go through the process of writing it
How could it be verbatim the same response you got? Even if you both typed the exact same prompt, you wouldn't get the exact same answer.[0, 1]
[0] https://kagi.com/assistant/8f4cb048-3688-40f0-88b3-931286f8a...
[1] https://kagi.com/assistant/4e16664b-43d6-4b84-a256-c038b1534...
I wish more people held the same opinion, actually. Unfortunately, my sense is that most people don't care; they are fine with LLM-generated crap.
It honestly makes me want to blow my brains out
That's not to say computers can't generate beautiful things, but unless you expand the context out to include the history of how a program that can create such art came to be, the output is not meaningful. This is why people do not react well to AI art made from simply throwing prompts at a model, or writing that does not feel like it has style, struggle, or any personal flavor.
I've always believed that LLMs will be able to fake it perfectly one day. But as a music fan, no fully computer-generated music will ever bring me the range of emotion and joy that another human's story and creative process through that story does.
For me personally, this means that I read less on the internet and more pre-LLM books. It's a sad development nevertheless.
There’s an argument that the creator is just using AI as a tool to achieve their vision. I do not think that’s how people using AI are actually engaging with it at scale, nor is it the desired end state of people pushing AI. To put it bluntly, I think it’s cope. It’s how I try to use AI in my work but it’s not how I see people around me using it, and you don’t get the miracle results boosters proclaim from the rooftop if you use it that way.
It does seem that LLMs could avoid this detection with some superficial tweaks such as injecting poor grammar and reducing peppiness. I hope it doesn't get to the point that I have to become suspicious of all text.
Speak for yourself. Some of the most fascinating poetry I have seen was produced by GPT-3. That is to say, there was a short time period when it was genuinely thought-provoking, and it has since passed. In the age of "alignment," what you get with commercial offerings is dog shite... But this is more a statement on American labs (and to a similar extent, the Chinese who have followed) than on "computers" in the first place. Personally, I'm looking forward to the age of computational literature, where authors like me would be empowered to engineer whole worlds, inhabited by characters ACTUALLY living in the computer. (With the added option of the reader playing one of the parts.) This will radically change how we think about textual form, and I cannot wait for compute to do so.
Re: modern-day slop, well, the slop is us.
Denial of this comes from a place of ignorance; take the blinkers off and you might learn something! Slop will eventually pass, but we will remain. This is the far scarier proposition.
It's hard to imagine these feeling like characters from literature and not characters in the form of influencers / social media personalities. Characters in literature are in a highly constrained medium, and only have to do their story once. In a generated world the character needs to be constantly doing "story things". I think Jonathan Blow has an interesting talk on why video games are a bad medium for stories, which might be relevant.
So you want sapient, and possibly sentient, beings created solely for entertainment? Their lives constrained to said entertainment? And you'd want to create them inside of a box that is even more limited than the space we live in?
My idea of godhood is to first try to live up to a moral code that I'd be happy with if I was the creation and something else was the god.
If this isn't what you meant, then yes, choose your own adventure is fun. But we can do that now with shared worlds involving other humans as co-content creators.
Art is something out of the norm, and it should make some sense at some clever level.
But if there was AI that truly could do that, I would love to see it, and would love to see even more of it.
You can see this clearly if you ask an AI to make original jokes. These usually aren't very good, and when they are, it's because the model got randomly lucky somehow. It is able to come up with related analogies for the jokes, but this is just simple pattern matching of what is similar to what, not insightful and clever observation.
I've had a lot of luck using GPT5 to interrogate my own writing. A prompt I use (there are certainly better ones): "I'm an editor considering a submitted piece for a publication {describe audience here}. Is this piece worth the effort I'll need to put in, and how far will I need to cut it back?". Then I'll go paragraph by paragraph asking whether it has a clear topic, flows, and then I'll say "I'm not sure this graf earns its keep" or something like that.
GPT5 and Claude will always respond to these kinds of prompts with suggested alternative language. I'm convinced the trick to this is never to use those words, even if they sound like an improvement over my own. At the first point where that happens, I dial my LLM-wariness up to 11 and take a break. Usually the answer is to restructure paragraphs, not to apply the spot improvement (even in my own words) the LLM is suggesting.
LLMs are quite good at (1) noticing multi-paragraph arcs that go nowhere (2) spotting repetitive word choices (3) keeping things active voice and keeping subject/action clear (4) catching non-sequiturs (a constant problem for me; I have a really bad habit of assuming the reader is already in my head or has been chatting with me on a Slack channel for months).
Another thing I've come to trust LLMs with: writing two versions of a graf and having it select the one that fits the piece better. Both grafs are me. I get that LLMs will have a bias towards some language patterns and I stay alert to that, but there's still not that much opportunity for an LLM to throw me into "LLM-voice".
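For anyone who'd rather script this paragraph-by-paragraph interrogation than paste into a chat window, here's a minimal sketch. The `ask` stub, the role text, and the function names are all my own assumptions, not anyone's actual workflow; wire in whatever provider you use.

```python
# Minimal sketch of a paragraph-by-paragraph editing loop. The chat API
# call is stubbed out; only the payload construction is shown.

EDITOR_ROLE = (
    "You are an editor considering a submitted piece for a publication. "
    "Never propose replacement wording; only critique structure and flow."
)

def split_grafs(draft: str) -> list[str]:
    """Split a draft into paragraphs on blank lines."""
    return [p.strip() for p in draft.split("\n\n") if p.strip()]

def build_queries(draft: str) -> list[list[dict]]:
    """Build one chat payload per paragraph: does this graf earn its keep?"""
    queries = []
    for i, graf in enumerate(split_grafs(draft), start=1):
        queries.append([
            {"role": "system", "content": EDITOR_ROLE},
            {"role": "user",
             "content": f"Paragraph {i}:\n\n{graf}\n\n"
                        "Does this graf have a clear topic, flow from the "
                        "previous one, and earn its keep?"},
        ])
    return queries

# def ask(messages):  # stub: swap in your provider's chat API here
#     ...

draft = "First graf about the topic.\n\nSecond graf that may not earn its keep."
payloads = build_queries(draft)
print(len(payloads))  # one query per paragraph
```

Note the system role forbids replacement wording, which matches the point above: take the structural critique, never the suggested words.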
Like, sure, it's possible to do this with an LLM, but it's also possible to do it without, at roughly similar levels of effort, without contributing to all of the negative externalities of the LLM/genAI ecosystem.
It makes the writing process faster and more enjoyable, despite never using anything the LLM generates directly.
Workshopping with humans is even better, if you find the right humans, but they have an annoying habit of not being available 24/7.
In case it helps anyone, here is my prompt:
"You are a professional writer and editor with many years of experience. Your task is to provide writing feedback, point out issues and suggest corrections. You do not use flattery. You are matter of fact. You don't completely rewrite the text unless it is absolutely necessary - instead you try to retain the original voice and style. You focus on grammar, flow and naturalness. You are welcome to provide advice changing the content, but only do that in important cases.
If the text is longer, you provide your feedback in chunks by paragraph or other logical elements.
Do not provide false praise, be honest and feel free to point out any issues."
(Yes, you kind of need to repeat you're actively not looking for a pat on the back, otherwise it keeps telling you how brilliant your writing is instead of giving useful advice.)
I wonder if this is due to LLMs being trained on persuasive writing.
It's not just LLMs; it's how the algorithms promote engagement: ragebait, videos with obvious inaccuracies, etc. Who gets rewarded? The content creators and the platform. Engaging with it just seems to accentuate the problem.
There need to be algorithms that respond to cohort and individual preferences.
Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
I guess, but I'm on quite a few "algorithm-free" forums where the same thing happens. I think it's just human nature. The reason it's under control on HN is rigorous moderation; when the moderators are asleep, you often see dubious political stuff bubble up. And in the comments, there's often a fair amount of patently incorrect takes and vitriol.
Some of that you may experience as 'dubious political stuff' and 'patently incorrect takes'.
Edit, just to be clear: I'm not saying HN should be unmoderated.
Since they are relatively open, at some point someone comes in who doesn't care about anything, or is extremely vocal about something, and... there goes the nice forum.
We were three friends: a psychology major, a recovering addict, and then a third friend with no background for how these sorts of behavioral addictions might work. Our third friend really didn't "get it" on a fundamental level. If any game had anything like a scoreboard, or a reward for input, he'd say "it's crack points!" We'd roll our eyes a bit, but it was clear that he didn't understand that certain reward schedules had a very large effect on behavior, and not everything with some sort of identifiable reward was actually capable of producing behavioral addiction.
I think of this a lot on HN. People on HN will identify some surface similarity, and then blithely comment "see, this is nothing new, you're either misguided or engaged in some moral panic." I'm not sure what the answer is, but if you cannot see how an algorithmic, permanently-scrolling feed differs from people being rude in the old forums, then I'm not sure what would paint the picture for you. They're very different, and just because they might share some core similarity does not actually mean they operate the same way or have the same effects.
Two of my Bluesky posts on AI were attacked by all kinds of random people, which in turn led to some of those folks sending me emails and dragging some of my Lobsters and Hacker News comments into online discourse. Not a particularly enjoyable experience.
I’m sure one can have that same experience elsewhere, but really it’s Bluesky where I experienced this on a new level personally.
That's truly all I need.
They are destroying our democratic societies and should be heavily regulated. The same will become true for AI.
By who, exactly? It’s easy to call for regulation when you assume the regulator will conveniently share your worldview. Try the opposite: imagine the person in charge is someone whose opinions make your skin crawl. If you still think regulation beats the status quo, then the call for regulation is warranted, but be ready to face the consequences.
But if picturing that guy running the show feels like a disaster, then let’s be honest: the issue isn’t the absence of regulation, it’s the desire to force the world into your preferred shape. Calling it “regulation” is just a polite veneer over wanting control.
With or without social networks this anger will go somewhere, don't think regulation alone can fix that. Let's hope it will be something transformative not in the world ending direction but in the constructive direction.
N.B. Still employed btw.
This is literally how most of the world uses LinkedIn
I never understand why people feel compelled to delete their entire account to avoid reading the feed. Why were you even visiting the site to see the feed if you didn’t want to see the feed?
I never signed up for Facebook or Twitter. My joke is I am waiting until they become good. They are still shitty and toxic from what I can tell from the outside, so I'll wait a little longer ;-)
Twitter was an incredible place from 2010 to 2017. You could randomly message someone and they would more often than not respond. Eventually an opportunity would come and you'd meet in person. Or maybe you'd form an online community and work towards a common goal. Twitter was the best place on the internet during that time.
Facebook had a golden age as well. It was the place to organize events, parties, and meetups, before Instagram and DMs took over. Nothing beats seeing someone post an album from last night's party and messaging your friends to ask if they remember anything that happened.
I know being cynical is trendy, but you genuinely missed out. Social dynamics have changed. Social media will never be as positive on an individual level as it was back then.
Something like Instagram where you have to meet with the other party in person to follow each other and a hard limit on the number of people you follow or follow you (say, 150 each) could be an interesting thing. It would be hard to monetize, but I could see it being a positive force.
Actually, I deleted my account there before, after twitter sent me spam mail trying to lecture me about what I write. There was nothing wrong with what I wrote - twitter was wrong. I cannot accept AI-generated spam from twitter, so I went away. Don't really miss it either, but Elon really worsened the platform significantly with his antics.
> Just because I said to someone 'Brexit was dumb', I don't expect to get fed 1000 accounts talking about it 24/7. It's tedious and unproductive.
Yeah, I can relate to this, but mostly what annoyed me was that twitter interfered "we got a complaint about you - they are right, you are a troublemaker". I don't understand why twitter wants to interfere into communication. Reddit is even worse, since moderators have such a wild range of what is "acceptable" and what is not. Double-standards everywhere on reddit.
As so many have said, enragement equals engagement equals profit.
All my social media accounts are gone as well. They did nothing for me and no longer serve any purpose.
TBF Bluesky does offer a chronological feed, but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
> but the well-intentioned blocklists just became the chief tool for the mean girls of the site.
I've never used it, but yes this is what I expected. It would be better to have topical lists that users could manually choose to follow or block. This would avoid quite a bit of the "mean girl" selectivity. Though I suppose you'd get some weird search-engine-optimization like behavior from some of the list curators (even worse if anyone could add to the list).
I’m not the biggest Twitter user but I didn’t find it that difficult to get what I wanted out of it.
You already discovered the secret: You get more of what you engage with. If you don’t want to hear a lot of Brexit talk, don’t engage with Brexit content. Unfollow people who are talking a lot about Brexit
If you want to see more of something, engage with it. Click like. Follow those people. Leave a friendly comment.
On the other hand, some people are better off deleting social media if they can’t control their impulses to engage with bait. If you find yourself getting angry at the Brexit content showing up and feeling compelled to add your two cents with a comment or like, then I suppose deleting your account is the only viable option.
The algorithm doesn’t show you “more of the things you engage with”, and acting like it does makes people think what they’re seeing is a reflection of who they are, which is incorrect.
The designers of these algorithms are trying to figure out which “mainstream category” you are. And if you aren’t in one, it’s harder to advertise to you, so they want to sand down your rough edges until you fit into one.
You can spend years posting prolifically about open source software, Blender and VFX on Instagram, and the algorithm will toss you a couple of things, but it won't really know what to do with you (aside from maybe selling you some stock video packages).
But you make one three-word comment about Brexit and the algorithm goes "GOTCHA! YOU'RE ANTI-BREXIT! WE KNOW WHAT TO DO WITH THAT!" And now you're opted into 3 big ad categories and getting force-fed ragebait to keep you engaged, since you're clearly a huge political junkie. Now your feed is trash forever, unless you engage with content from another mainstream category (like Marvel movies or one of the recent TikTok memes).
That is really limiting though. I do not want to see Brexit ragebait in my threads, but I am quite happy to engage in intelligent argument about it. The problem is that if, for example, a friend posts something about Brexit I want to comment on, my feed then fills with ragebait.
My solution is to bookmark the friends and groups pages, and the one group I admin and go straight to those. I have never used the app.
I'll only use an LLM for projects and building tools, like a junior dev in their 20s.
i've learned pretty well how to 'guide' the algorithm so the tech stuff that's super valuable (to me) does not vanish, but still get nonsense bozo posts in the mix.
I dug out my old PinePhone and decided to write a toy OS for it. The project has just the right level of challenge and reward for me, and feels more like early days hacking/programming where we relied more on documentation and experimentation than regurgitated LLM slop.
Nothing beats that special feeling when a hack suddenly works. Today it was just a proximity sensor reading displayed, but it involved a lot of SoC hacking to get that far.
I know there are others hacking hard in obscure corners of tech, and I love this site for promoting them.
LLMs standardize communication the same way earlier forces did: empires expanding (culture), book printing (language), the industrial revolution (the power loom, factories, assembly procedures, etc.).
In that process, interesting but less "scale-able" culture, dialects, languages, craftsmanship, and ideas (or simply ones not used by the people in power) were often lost, replaced by easier-to-produce but often lower-quality products, through the power of "affordable economics" rather than active conflict.
We already have the English "business concise, buzzword-heavy" formal register trained into ChatGPT (or, for informal messaging, the casual overexcited American), which I'm afraid might take hold of global communication the same way as advanced LLM usage spreads.
Explain to me how "book printing" of the past "standardized communication" in the same way as LLMs are criticized for homogenizing language.
Everyone has the same few dictionary spellings (that are now programmed into our computers). Even worse (from a heterogeneity perspective), everyone also has the same few grammar books.
As examples: How often do you see American English users write "colour", or British English users write "color", much less colur or collor or somesuch?
Shakespeare famously spelled his own last name half a dozen or so different ways. My own patriline had an unusual variant spelling of the last name, that standardized to one of the more common variants in the 1800s.
https://en.wikipedia.org/wiki/History_of_English_grammars
"Bullokar's grammar was faithfully modelled on William Lily's Latin grammar, Rudimenta Grammatices (1534).[9] Lily's grammar was being used in schools in England at the time, having been "prescribed" for them in 1542 by Henry VIII.[5]"
It goes on to mention a variety of grammars that may have started out somewhat descriptive, but became more prescriptive over time.
What personally disturbs me the most is the self censorship that was initially brought forward by TikTok and quickly spread to other platforms - all in the name of being as advertiser friendly as possible.
LinkedIn was the first platform where I really observed people losing their unique voice in favor of corporate-friendly, please-hire-me speak. Now this seems to be basically every platform. The only platform that seems somewhat protected from it is Reddit, where many mods seem to dislike LLMs as much as everybody else. But more likely, it's just less noticeable.
I think that’s even too soon! YouTube has had rules around being advertising friendly for longer than TikTok has existed. And the FCC has fined swearing on public broadcasts for like 50+ years.
But I do agree, we’re attributing too much to LLMs. We don’t see personal, human-oriented content online because social media is just not about community.
1. Young people (correctly) realized they could make lots of money being influencers on social media. TikTok does make that easier than ever. I have close friends who make low 6 figures streaming on TikTok (so obviously they quit the low wage jobs they were doing before).
2. People have been slowly waking up to the fact that social media has always been pretty fake. I quit 6 years ago, and most of my friends have slowly reduced how much they use it. All of the platforms are legally incentivized to only care about profit and engagement. Capitalism doesn’t allow a company to care about community and personal voice, if algorithmic feeds of influencers will make them more money.
There’s still good content out there if you know where to look. But digital human connection happens in group chats, DMs, and FaceTime, not on public social media.
what if we flip LLMs into voice trainers? Like, use them to brainstorm raw ideas and rewrite everything by hand to sharpen that personal blade. atrophy risk still huge?
Nudge to post more of my own mess this week...
Don't look at social media. Blogging is kinda re-surging. I just found out Dave Barry has a substack. https://davebarry.substack.com/ That made me happy :) (Side note, did he play "Squirrel with a Gun??!!!")
The death of voice is greatly exaggerated. Most LLM voice is cringe. But it's ok to use an LLM, have taste, and get a better version of your voice out. It's totally doable.
I don't judge, I'm not an artist so if I wanted to express myself in image I'd need AI help but I can see how people would do the same with words.
Of course, there might be hundreds of AI comments that pass my scrutiny because they are convincing enough.
Frankly, it only takes someone a few times to "fall" for an LLM article -- that is, to spend time engaging with an author in good faith and try to help improve their understanding, only to then find out that they shat out a piece of engagement bait for a technology they can barely spell -- to sour the whole experience of using a site. If it's bad on HN, I can only imagine how much worse things must be on Facebook. LLMs might just simply kill social media of any kind.
These kinds of posts regularly hit the top 10 on HN, and every time I see one I wonder: "Ok, will this one be just another staid reiteration of an obvious point?"
Why do it at all if I won't do better than the AI?
The worst risk with AI is not that it replaces working artists, but that it dulls human creativity by killing the urge to start.
I am not sure who said it first, but every photographer has ten thousand bad photos in them and it's easier if they take them at the beginning. For photographers, the "bad" is not the technical inadequacy of those photos; you can get past that in the first one hundred. The "bad" is the generic, uninteresting, uninspiring, underexplored, duplicative nature of them. But you have to work through that to understand what "good" is. You can't easily skip these ten thousand photos, even if your analysis and critique skills are strong.
There's a lot to be lost if people either don't even start or get discouraged.
But for writing, most of the early stuff is going to read much like this sort of blog post (simply because most bloggers are stuck in the blogging equivalent of the ten thousand photos; the most popular bloggers are not those elevating writing).
"But it looks like AI" is the worst, most reflexive thing about this, because it always will, since AI is constantly stealing new things. You cannot get ahead of the tireless thief.
The damage generative AI will do to our humanity has only just started. People who carry on building these tools knowing what they are doing to our culture are beneath our contempt. Rampantly overcompensated, though, so they'll be fine.
How do you know? A lot of the stuff I see online could very much be produced by LLMs without me ever knowing. And given the economics I suspect that some of it already is.
https://rmoff.net/2025/11/25/ai-smells-on-medium/
He doesn't link many examples, but at the end he points to an author pumping out 8+ articles in a week across a variety of topics. https://medium.com/@ArkProtocol1
I don't spend time on medium so I don't personally know.
For myself, I have been writing all my life. I tend to write longform posts from time to time[0], and enjoy it.
That said, I have found LLMs (ChatGPT works best for me) to be excellent editors. They can help correct minor mistakes, as long as I ignore a lot of their advice.
The few who have something important to say will say it, and we will listen regardless of the medium.
People will spend time on things that serve utility AND are calorically cheap. Doomscrolling is a more popular pastime than, say, completing Coursera courses.
Economy is shit? Let's throw out the immigrants because they are the problem, and let's use the most basic idea of taxing everything to death.
No one wants to hear hard truths, and no one wants to accept that even as adults, they might just not be smart. Just because you became an adult, your education should still matter (and I do not mean having one degree = expert).
There are skilled writers. Very skilled, unique writers. And I'm both exceedingly impressed by them as well as keenly aware that they are a rare breed.
But there's so many people with interesting ideas locked in their heads that aren't skilled writers. I have a deep suspicion that many great ideas have gone unshared because the thinker couldn't quite figure out how to express it.
In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.
Of course, I love a good, unique voice. It's a pleasure to parse patio11's straussian technocratic musings. Or pg's as-simple-as-possible form.
And I hope we don't lose those. But somehow I suspect we may see more of them as creative thinkers find new ways to express themselves. I hope!
I could agree with you in theory, but do you see the technology used that way? Because I definitely don't. The thought process behind the vast majority of LLM-generated content is "how do I get more clicks with less effort", not "here's a unique, personal perspective of mine, let's use a chatbot to express it more eloquently".
It's not some magic roadblock. They just didn't want to spend the effort to get better at writing; you get better at writing by writing (like good old Steve says in "On Writing"). It's how we all learnt.
I'm also not sure everyone should be writing articles and blog posts just because. More is not better. Maybe if you feel unmotivated about making the effort, just don't do it?
Almost everyone will cut novice writers and non-native $LANGUAGE speakers some slack. Making mistakes is not a sin.
Finally, my own bias: if you cannot be bothered to write something, I cannot be bothered to read it. This applies to AI slop 100%.
Writing is one of the most accessible forms of expression. We were living in a world where even publishing was as easy as imaginable - sure, not actually selling/profiting, but here’s a secret, even most bestselling authors have either at least one other job, or intense support from their close social circle.
What you do to write good is you start by writing bad. And you do it for ages. LLMs not only don’t help here, they ruin it. And they don’t help people write because they’re still not writing. It just derails people who might, otherwise, maybe start actually writing.
Framing your expensive toy that ruins everything as an accessibility device is absurd.
They aren't your ideas if they're coming out of an LLM.
Worse is better.
A unique, even significantly superior, voice will find it hard to compete against the pure volume of terrible, non-unique LLM-generated voices.
Worse is better.
* 28% of U.S. adults are at or below "level 1" literacy, essentially meaning people unable to function in an environment that requires written language skills.
* 54% of U.S. adults read below a sixth-grade level.
These statistics refer to an inability to interpret written material, much less create it. As to the latter, a much smaller percentage of U.S. adults can compose a coherent sentence.
We're moving toward a world where people will default to reliance on LLMs to generate coherent writing, including college students, who according to recent reports are sometimes encouraged to rely on LLMs to complete their assignments.
If we care to, we can distinguish LLM output from that of a typical student: An LLM won't make the embarrassing grammatical and spelling errors that pepper modern students' prose.
Yesterday I saw this headline in a major online media outlet: "LLMs now exceed the intelect [sic] of the average human." You don't say.
There's a data centre somewhere in the US running additions and multiplications through a block of numbers that has captured my voice.
Others respond in the same style. As a result, it ends up with long, multi-paragraph messages full of em dashes.
Basically, they are using AI as a proxy to communicate with each other, trying to sound more intelligent to the rest of the group.
I don't disagree, but LLMs happened to help with standardizing some interesting concepts that were previously more spread out (drift, scaffolding, and so on). It helps that ChatGPT has access to such a wide audience, allowing that level of language penetration. I am not saying don't have a voice. I am saying: take what works.
What do you mean? The concepts of "drift" and "scaffolding" were uncommon before LLMs?
Not trying to challenge you. Honestly trying to understand what you mean. I don't think I have heard this ever before. I'd expect concepts like "drift" and "scaffolding" to be already very popular before LLMs existed. And how did you pick those two concepts of aaallll... the concepts in this world?
Talking to some friends, they feel the same. Depending on where you are participating in a discussion, you just might not feel it is worth it, because the other side might just be a bot.
I agree I think we should try to do both.
In Germany, for example, we have very few typically German brands. Our brands became very global. If you go to Japan, for example, you will find the same products, like ramen or cookies or cakes, everywhere, but all of them are slightly different, from different small producers.
If you go to a motorway rest area in Japan, you will find local products. If you do the same in Germany, you find just the generic American shit: Mars, Mondelez, PepsiCo, Unilever...
Even our German coke, Fritz-kola, is a niche/hipster thing even today.
I have always had a very idiosyncratic way of expressing myself, one that many people do not understand. Just as having a smartphone has changed my relationship to appointments - turning me into a prompt and reliable "cyborg" - LLMs have made it possible for me to communicate with a broader cross section of people.
I write what I have to say, I ask LLMs for editing and suggestions for improvement, and then I send that. So here is the challenge for you: did I follow that process this time?
I promise to tell the truth.
And who's to say your idiosyncratic expression wouldn't find an audience as it changes over time? Just you saying that makes me curious to read something you wrote.
At some point, generation breaks the social contract that I'm spending my energy and attention consuming something another human spent their energy and attention creating.
In that case I'd rather read the prompt the human brain wrote, or if I have to consume it, have an LLM consolidate it for me.
The discomfort and annoyance that sentence generates is interesting. Being accused of being a bot is frustrating, while interacting with bots creates a sense of futility.
Back in the day when Facebook first was launched, I remember how I felt about it - the depth of my opposition. I probably have some ancient comments on HN to that effect.
Recently, I’ve developed the same degree of dislike for GenAI and LLMs.
Improve grammar and typos in my draft but don't change my writing style.
Your mileage may vary.
Skill becomes an expensive mechanized commodity,
old code is left to rot while people try to survive
we lose our history, we lose our dignity.
In any case, as someone who has experimented with AI for creative writing, LLMs _do not destroy_ your voice. They do flatten it, but with minimal effort you can make the output sound the way you think best reflects your thought.
There's a lot of talk over whether LLMs make discourse 'better' or 'worse', with very little attention given to the crisis online discourse was already in before they came around. Edelman was astroturfing long before GPT. Fox 'news' and the spectrum of BS between them and the NYT (arranged by how sophisticated they consider their respective pools of rubes to be) have always, always been propaganda machines and PR firms at heart, wearing the skin of journalism like Buffalo Bill.
We have needed to learn to think critically for a very long time.
Consider this: if you are capable of reading between the lines, and of dealing with what you read or hear on the merits of the thoughts contained therein, then how are you vulnerable to slop? If it was written by an AI (or a reporter, or some rando on the internet) but contains ideas that you can turn over and understand critically for yourself, is it still slop? If it's dumb and it works, it's not dumb.
I'm not even remotely suggesting that AI will usher in a flood of good ideas. No, it's going to be used to pump propaganda and disseminate bullshit at massive scale (and perhaps occasionally help develop good ideas).
We need to inoculate ourselves against bullshit, as a society and a culture. Be a skeptic. Ironman arguments against your beliefs. Be ready to bench-test ideas when you hear them, and make it difficult for nonsense to flourish. It is (and has been) high time to get loud about critical thinking.
Here's why:
I consider myself an LLM pragmatist. I use them where they are useful, and I educate people on them and try to push back on all the hype marketing disguised as futurism from LLM creators.
And now when I see these emoji fests I instantly lose interest and trust in the content of the email. I have to spend time sifting through the fluff to find what’s actually important.
LLMs are creating an asymmetry between the effort to write and the effort to read. What takes my coworkers probably a couple of minutes to draft takes me 2-3x as long to decipher. That balance used to be the opposite.
I've raised the issue before at work, and one response I got was to "use AI to summarize the email." Are we really spending all this money and energy on the world's worst compression algorithm?
There's enough potential and wiggle room, but people align, even when they don't agree, just to align.
When Rome was flourishing, only a few saw what was lingering in the cracks.
Social media already lost that nearly two decades ago - it died as content marketing rose to life.
Don't blame on LLMs what we long ago lost to the cancer that is advertising[0].
And don't confuse GenAI as a technology with what the cancer of advertising coopts it into. The root of the problem isn't the generative models, it's what they're used for, and the problematic uses aren't anything new. We've been drowning in slop for decades; it's just that GenAI is now cheaper than cheap labor in content farms.
--
[0] - https://jacek.zlydach.pl/blog/2019-07-31-ads-as-cancer.html
That's like giving weapons to everybody in the world for free, and then asking not to be blamed for the increased deaths and violence.
We improve our use of words when we work to improve our use of words.
We improve how we understand by how we ask.
- "Hey, Jimmy, the cookie jar is empty. Did you eat the cookies?"
- "You're absolutely right, father — the jar seems to be empty. Here is a bullet-point list of why consuming the cookies was the right thing to do..."
2) People who use LLMs for understanding
I think I'll stick to 2) for many reasons.
I suppose when your existence is in the cloud, the fall back to earth can look scary. But it's really only a few inches down. You'll be ok.
Of course there are also horrible uses of AI: liars, scummy cheaters, and fake videos on YouTube, which is owned by a greedy mega-corporation that sold its soul to AI. So the bad use cases may outnumber the good ones, but there are good use cases, and "losing our voice to LLMs" isn't the whole picture, sorry.
If you really have no metrics to hit (not even the internal craving for likes), then it doesn't make much sense to outsource writing to LLMs.
But yes, it's sad to see that your original stuff is lost in the sea of slop.
Sadly, as long as there will be money in publishing, this will keep happening.
Even before LLMs, if you wanted to be a big content creator on YouTube, Instagram, TikTok..., you'd better fall in line and produce content with the target aesthetic. Otherwise, good luck.
Predictably, this has turned into a horror zone of AI written slop that all sounds the same, with section titles with “clever” checkbox icons, and giant paragraphs that I will never read.
And that too is an expression of their own agency. #Laissez-faire
We've proved we can sort of value it, through supporting sustainability/environmental practices, or at least _pretending to_.
I just wonder, what will be the "Carbon credits" of the AI era. In my mind a dystopian scheme of AI-driven companies buying "Human credits" from companies that pay humans to do things.
I'd love to see an actual study of people who think they're proficient at detecting this stuff. I suspect that they're far less capable of spotting these things than they convince themselves they are.
Everything is AI. LLMs. Bots. NPCs. Over the past few months I've seen demonstrably real videos posted to sites like Reddit, and the top post is someone declaring that it is obviously AI, they can't believe how stupid everyone is to fall for it, etc. It's like people default assume the worst lest they be caught out as suckers.