Every autocorrect or auto-suggestion ever has felt like this to me, but the volume has been turned up to 11. The otherwise drab Adobe Reader is covered with colorful sparkly buttons and popups suggesting I need not even read the document because it can give me “insights.” First, no, you may not read my proprietary document, nor do I suspect most people using this particular software - I only have it for digital signatures - have permission to share IP with a third party. But mostly: it can sometimes be a useful tool, and the fact that everyone is shoving it in my face reeks of desperation.
The tech industry is in real trouble.
It is unclear what the industry thinks has changed, that people will now welcome "It looks like you're [whatever]".
This forum (HN) attracts a certain population that wants to do things, to understand, to share relatively well-founded opinions and have a discussion.
But look around, look at the new hires in the other departments. And by new I mean young, in their 20s. A lot of them welcome this kind of thing; they evaluate by popularity and likes. The marketing behind the AI bubble knows this, and so it pushes for it. Making it popular is more important than making it useful, because there is a tipping point where it is popular enough that we capitulate.
Turns out that Idiocracy is not that far off (https://www.imdb.com/title/tt0387808/)
So it won't change, because there is no lesson to learn. No individual involved acted irrationally.
Which approach gives companies more control over users? Which one allows companies to sell that access to the highest bidder?
> It is unclear what the industry thinks has changed
The demographics of computer (and other device) use have changed massively since the late 90s, and the suggestion engines are much more powerful.
I still want it all to take a long walk off a short pier, but a lot of people seem happy with it bothering them.
Maybe it isn’t the tech industry, and just consumer-facing apps.
On some level it's enormously baffling that this was the thing they decided they needed to do. Conversely, Adobe Reader on my phone won't shut up about Liquid Mode either (which uploads to Adobe servers), and Microsoft and Google's solution to "people don't want to use our AI assistants" was to ensure they literally can't be disabled or removed.
This is a massive liability that almost everybody seems to be ignoring. My employer has a ban on using AI on IP until this is properly resolved, because we actually care about it leaking.
Maybe an Information Commissioner will get round to issuing a directive some time in the mid-2030s about how none of this complies with GDPR.
Yet I can almost guarantee you that someone has put something they shouldn't through ChatGPT, because they either felt it was a dumb rule that shouldn't apply to them, or they were in a hurry and figured, what are the odds of getting caught?
- if everyone is doing it, you can't really fault anyone
- on some level we are, or will be, kinda dependent on that AI and opting out will probably be made unpleasant via dark patterns as usual
- no pushback against every piece of software, including at the operating-system level, slurping up all the keystrokes and data, let alone the data that's already in the cloud - big tech knows everything about us, but to my surprise no major public leak has happened, i.e. one where you can really see your neighbor's private data without buying leaked data from someone on the dark web or wherever
- things are moving too fast, and you don't know if you can afford to have your programmers not use tomorrow's AI, for example, so your "bans" will have to be soft etc., this limits the potential pushback and outrage
- apologizes
- implies the recipient was "promised" this email as a "response" to something
- blames a hectic schedule
- invites questions
None of this was in or was even implied in the original. This is not a "polished" version, it's just a straight-up different email. I thought that style transfer while maintaining meaning was one of the few things LLMs can be good at, but this example fails even that low bar.
AI exists in a Matrix where toxic positivity is enforced with electric shocks.
(tryna be funny not patronizing. but the machinery of subjectivity production is ofc very real)
It's no surprise LLMs are using corp speak and vapid marketing prose as a template. There is so much of it out there.
This is from that Autodesk post last week where they admitted their mistake and… Nope it's corp speak:
“We are excited to share some important updates regarding Archiving and our Idea Boards and Forums that aim to enhance your experience and ensure valuable content remains accessible. Please read the details below to understand how these changes might impact you.”
Barf. But to an LLM this looks like a human communicating in a meaningful way.
This person cares about not putting up a fake identity. That's pretty cool, but social media has exposed that a large number of people are perfectly fine presenting an illusion. People will have no shame passing off well written things as an output of their talent and hard work. Digital makeup has no bounds.
Sometimes I don't want to waste my time crafting a professional e-mail to a bunch of jerks full of themselves. Maybe I want to write it as it comes off my brain, and let my digital scribe reformulate it so that the people reading it feel respected/validated/flattered. Am I putting up a fake identity then? Am I presenting an illusion of professionalism? Maybe writing "Best regards" instead of "Bye" is the facade of professionalism in the first place.
When you did it manually you were putting up a fake identity. ofc using an AI to fake you being fake for work would be fake.
The idea that our work personas aren't at least a little fake is toxic. Depending on where you work it might be a lot fake.
Wear your character as lightly as a cap, don't get tricked into method acting.
The "bunch of jerks full of themselves" likely aren't reading the emails now; we're burning immense amounts of energy for your politeness to be generated, and distilled out at the other end into a no-nonsense summary missing all the niceties another AI just added.
Many people didn't think about what they were trying to convey (or self-analyse how they presented themselves) when drafting correspondence in the past; now, many people think just as little, and often continue, like before, to neglect to meaningfully proofread whatever they had the LLM generate for them before hitting Send.
Of course, I don't like it. But in some ways, it's just not a whole lot different from what it was before in that you can often still tell apart the people who care to be articulate from those who don't. Though, I feel bad for people disproportionately waylaid by the new paradigm like the bug/security responders on the curl project.
At a high level I see convergence of styles, topics, and behaviors toward a generic form, both in "AI" and social media. Which to me suggests that the "AI" solutions are doing exactly what we would do ourselves, just faster.
I'm a manager, so when I'm sending emails to a customer or talking with one of my reports, I care deeply - so you might get some overwrought florid prose, but it's my overwrought florid prose.
On the other hand, I have to lead a weekly meeting that exists solely to provide evidence for compliance reasons, something out of the CIA's sabotage field manual that David Graeber has probably written about. But it is now a thirty-second exercise in uploading a transcript to ChatGPT, prompting for three evidentiary bullet points, and pasting the output into a wiki no human will ever read.
Probably related to why I can copy a piece of code from elsewhere (with sufficient work to verify it does what I expect and only what I expect) but I don't copy a quote and use it as my own. My words are my words. My code doesn't have the same guarantee.
Writing uses the entire breadth of human language to convey information between human beings with unique and complex understandings of the universe. If those words come from a machine that is not you - that is not someone - you ought to disclose it.
If you use an AI to communicate, then you either fail to mimic those group membership signals and you look like an idiot. Or you succeed and show that a machine can fool humans at this game. Any grifter can come along and establish trust in a group by relying on this tech. This dance that humans have been doing since the dawn of time suddenly breaks down, and that doesn't feel good.
No doubt the recipient will also be using some form of AI summarization that strips away all that added "polish" - making the whole exercise entirely redundant!
It just feels absurd.
The more the first pads, the more the second is needed.
If AI really were Intelligent, I'd fear it's an organism making sure it's needed in the ecosystem.
Not entirely, there’s still the energy usage and stock price increases. All because everyone’s too anxious to just talk to each other directly.
So no, thank you. Correct my textbook punctuation mistakes, and leave my wordy and "not positive enough" sentences to me.
Though, its stenography is a bit more obvious given the "you've got to be able to read it".
The kindle version of the book starts with https://imgur.com/uIBjwlQ
This would give you the opportunity to have another ending to the book.
>You have an individual and unique way of speaking and writing? You're going to wish your e-mail finds people well, corporate-monkey.
It feels like the whole world is turning into an HR department premised on the ideological axiom that killing one man is a murder but killing a million is a statistic.
So is a landmine.
One time, I couldn't remember the name of the album I wanted Siri to play, so it kept picking the wrong one. After the fifth or so time, I told Siri, "you're killing me, Siri" and it called 911.
The future sucks.
What did you expect, for the machine to understand subtleties and respond like an informal friend?
I'm fine with AI being 100% serious
'You are a professional proofreader and editor. Your task is to rewrite this dictation with more casual capitalization and punctuation so it can be used in a chat setting.
- Always capitalize "I" and derived words like "I'm"
- Sentences should start with lower case letters unless they start with “I”
- Replace explicit punctuation callouts with the actual punctuation
- No period at the end of the message, but leave periods between sentences
- Do not remove commas
- Do not change sophisticated words to more common ones
Return only the result. Do not add any explanations or comments about your edit.'
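For what it's worth, a prompt like this is usually just the system message in a chat request. Here's a minimal sketch (not from the original comment) of how it might be packaged for an OpenAI-style chat API; the client call and model name in the trailing comment are illustrative assumptions:

```python
# Fixed system prompt encoding the dictation-cleanup rules above (condensed).
SYSTEM_PROMPT = (
    "You are a professional proofreader and editor. Rewrite this dictation "
    "with more casual capitalization and punctuation for a chat setting. "
    'Always capitalize "I" and derived words; start sentences lower-case '
    'unless they begin with "I"; replace explicit punctuation callouts with '
    "the actual punctuation; no period at the end of the message; do not "
    "remove commas or simplify sophisticated words. Return only the result."
)

def build_messages(dictation: str) -> list[dict]:
    """Pair the fixed system prompt with the raw dictated text."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": dictation},
    ]

msgs = build_messages("Hello comma I got your note period")
# With any OpenAI-compatible client this would then be one call, e.g.:
#   client.chat.completions.create(model="gpt-4o-mini", messages=msgs)
```

The point of keeping the rules in the system message (rather than prepending them to the user text) is that the dictation stays untouched and the rules apply uniformly to every request.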
Yes yes, languages and translations are a potentially valid use case, but irrelevant for the vast majority of people I communicate with.
Even in stuffy institutions like banks and insurance companies, corporate email style is very straightforward and, idk, undecorated? You learn pretty quick that people only skim emails and every word beyond your core message creates opportunity for your core message to be skipped over. NOBODY is writing salutations or 'i hope this finds you well' or whatever other garbage chatgpt thinks the first paragraph should be.
When I read historical items on lettersofnote I find that even super old business messages are pretty streamlined and only have some crunch because some runner has to physically find the dude you are writing to.
"The idea of a Soul, Free Will -- these are Over!" - Technocracy explained by Prof Yuval Harari https://youtu.be/NV0CtZga7qM
(People tend to forget it, and it's only seen in passing, but the Ministry of Truth has machines for writing terrible books in the novel!)
Everything is a product.
Every choice is a product purchasing choice.
We ourselves are products.
An endless sea of products selecting from other products.
Only about 5% of the world’s population are native English speakers, but more than twice that number use it daily. For many, AI rewriting isn’t about losing personal style—it’s about making sure they can communicate clearly in a language that isn’t their first.
It’s fair to dislike how AI flattens unique voices, but for a huge part of the world, it’s not erasing personality—it’s making participation possible.
When I'm communicating with a non-native speaker, I intentionally use shorter / easier to translate words and sentences, and I give them more leeway with word usage / don't expect them to use the right words all the time. And that's fine! Communication still happens! We manage!
But if a non-native speaker starts running their text through an AI it makes communication harder, not easier. I can't tell if their word choice is intentional or if the AI did it. A tiny mistake I can understand gets expanded into multiple incorrect sentences.
Absolutely this. "Accessibility" and "participation" are great goals on paper, but the tools at hand are likely to introduce confusion because the user fundamentally isn't in a position to judge the quality of the output.
Lack of comprehension on what other people said was a big issue. But also having four incomprehensible paragraphs thrown at me for what could be six words (not infrequently based on a misunderstanding of a very basic simple sentence).
I'm not a native speaker either, but the only way to learn a language is to actually use it. For better or worse, English is the modern world's lingua franca.
It's how well you know English that's correlated with being able to write well in English instead.
it just makes them better at copy and pasting
For published work, if it's not worth editing then it's not worth reading (I would go further personally and say that most published, edited, and peer-reviewed work in your area of interest isn't worth reading anyway)
For unpublished work, like an email, ask the AI to translate the passage while maintaining style and tone. It will still flatten it, but not as much as the complete dogshit I read in the article.
Communication is a job requirement, faking it with AI is going to go about as well as someone faking programming skills. Not very!
These are things that I _used_ to simply ask a search engine, before Google results became 99% SEO-optimized blogspam and therefore useless for actual knowledge-seeking.
I think an important part of overcoming AI skepticism is to understand (at a very high level) how it all works so that you understand its limitations and know when you can and cannot trust it.
It does seem, to me, that the benefits are there, but not to the extent that the AI companies would have me believe.
I like doing the things that AI is supposedly good at. I like learning, I like understanding what I am doing, I like the satisfaction of finally getting to a solution after banging my head against the wall for hours. I like the occasional sensation of being hopelessly lost and finding my way to the light. I like writing text and code. I'm not terribly bored by boilerplate. I like going down rabbit holes and experiencing happy accidents. I can't in good conscience sign my name to something that I did not create and do not understand deeply enough to explain to anybody who asks. I like doing the hard thing.
The immediate response I'm sure that I will get is some variant of "well everybody else is using it as a force multiplier and the way you do it is making the job take too long." And maybe that's true. Maybe I don't care. I am a person who takes pride in my craft. I enjoy the act of making things. Some people don't, I guess.
The day may come where I am viewed as a dinosaur, where the way I work becomes fundamentally incompatible with the way the industry works. And if that day comes, bluntly, fuck the industry. I'll go fix air conditioners instead.
I personally do not want you -- for any value of "you" -- to use an AI to "polish" any writing sent to ME. I want the author's actual thoughts, not their thoughts massaged by an LLM.
But I think it's a safe assumption that she can write well when she wants to.
It is actually one of the first things I used LLMs for, and I still do. Latest example: I received an email from an administration asking me to sign some document, but it contained some incorrect information. I used an LLM to write a nice letter telling them to fix it. I could have just replied "it is not A, it is B", but without some formalism, it sounds rude, and I didn't want to sound rude; I also didn't want to make the effort for that formalism. It may be one of the things LLMs do best; they are like supercharged model letters.
Of course I don't do that in informal situations or when I want some emotional connection. By the way, no LLM touched this comment, I don't consider posting on HN a formal situation.
EDIT: typos, but maybe I should have left them in, to show that it wasn't the work of an LLM ;)
I think it's interesting how this has changed. If you go back and look at politically correct corpo-speak from decades past, it was bland and impersonal but aimed to be direct, clear, and reasonably concise.
These days, it's still impersonal but now couched in platitudes, apologies, and excessively polite language in an attempt to avoid possibly offending anyone in any conceivable fashion.
I'm worried that we'll get to a point in regular work communications where responding to a question with a simple and succinct "yes" instead of, "Great idea, I think that is an absolutely perfect plan and you surely deserve a call-out at the next all-hands meeting for coming up with the idea to fix that problem in the team's code!!!" will get you put on an HR watchlist for being difficult to get along with.
It is not at all like you say at the end; it's way more boring, and that's the point. But out of curiosity, I asked ChatGPT for an appropriate formula for a use case like yours and got "Sounds good to me! I accept your suggestion—let's move forward with it." Of course, there is some formalism here; in day-to-day communication with people you know, I've found it to become less formal over the years. "ok" is fine now when it wasn't before. I think because text-based instant messaging is now commonplace.
If you analyze these systems as a work of art, using the postmodern toolset, the people and systems taking one of the most potent technologies humanity has ever created and using it to craft a banality machine is just... very revealing. Like the great-uncle who can't even finish protesting how non-racist he is without using a racial slur in the process.
"We're innovative! We're hip! We're on the cutting edge! We're setting trends! Now here, let me help you turn your text into the grayest corporate sludge imaginable."
It's art. It's probably not art they intended to make, but it's art.
It is, isn't it. It's the Sokal Hoax on a society-wide automated scale: rather than being concerned with what a "text" might "mean", generate millions upon millions of "text" sequences and mechanically separate those which pass as real enough. Baudrillard's simulacrum.
> It's art. It's probably not art they intended to make, but it's art.
People seem to be very insistent that the output of AI is not capital-A art, because that threatens their worldview, ignoring how artists had previously pushed to expand "art" away from any concerns of technique, intent, legibility etc.
It’s already severely frustrating that at many companies, you have to talk to several people (separated by copious waiting) to get your problem solved after handling a severely outdated and underdeveloped phone system.
Our society is very openly discriminatory against people who aren't able to produce written communication that fits a huge number of unwritten rules.
LLMs know those rules. Helping people with ESL better interact with a world that requires them is a huge win in my book.
So far the 21st century seems to be a mad dash to see who can turn off their brain the most.
> "why would I want to translate my email into Polish?"
But what about translation by professional human translators between languages? Is it really any better at preserving the intent and subtext?
Kundera has an amazing essay, "The Sentence", in his collection of essays "Testaments Betrayed", on the woes of translating, yes, a single sentence from Kafka's "The Castle".
As he (or his translator Linda Asher) says: The sentence "describes the coition of K. and Frieda".
Under the bar too ;-) Not a topic I tend to associate with Franz Kafka.
When AI systems can match Milan Kundera (translated from the French) then we will have definitely gotten to Pro level. "Write me a brilliant essay in the style of xxxx, on topic yyyy in French, and translate it into Mandarin for grins." I would be a wreck if I had to translate Kundera.
After I posted I immediately regretted it. It sounded too structured and should have been more whine-y. Weird, but it would have been more like-able.
15 years ago I might have imagined that an email sent by a robot would be extremely efficient, including only the relevant information, in order of importance. In 2025 it's apparently the opposite; we're trying our best to communicate effectively and an AI is nagging us to include an extra paragraph of flirtation.
I think a bunch of Google's Gemini ads highlighted how clueless they were on this subject. Like one of their very first Gemini announcements highlighted how you can get AI to write a caption for your social media photos of your dog. God forbid you actually use your own brain to write an actual human caption (flaws and all) for photos of your dog. Google also highlighted how their Pixel camera AI features can "correct" photos of your kids making funny faces in family photos, because how awful would it be if you didn't look like some sort of Stepford family.
I was glad Google finally got a ton of pushback against their ad about using AI for a young girl to write a fan letter to her Olympic idol. Seriously, fuck off with that. Makes me wonder if these marketing folks are actual humans in the first place.
"AI take a letter and polish it for me..."
Maybe even video conferences will use "Polish" to portray a recreated, lifelike, sophisticated avatar with real-time grammar correction. When folks finally do meet in real life, they discover a socially awkward, illiterate, or tactless boor instead of the suave intellect portrayed by the AI.
Please forgive my comment, I didn't have any polish for it. LOL
Person 2: AI, please take this email from Person 1 and convert it into bullet points
The second rule is if you insist, please let us turn off your dumb AI feature.
Oops, probably too late.
What if we just wrote articles well, such that (a) they were not filled with fluff, and (b) the summary would be to only read the first paragraph of the article?
> "Write a thank you note that sounds sincere to that ahole"

> "Some deep musing on the meaning of life tied to b2c marketing for a LinkedIn post"
It worked surprisingly well a lot of the time! But most of the time it also kinda broke the model in terms of coherent answers because it was obviously trained for the exact opposite thing.
But why would anyone put AI generated images into an Email?
It was not idiomatic Polish - very far from it really, so I guess depending on the model used here, the result could have been equally disappointing if it was really trying to Polish that email.
Presumably, the idea is to make the other person think you've written the email yourself, and you're a "better"/more corporate/more professional writer than you actually are.
But once everyone starts using this kind of transformer, so nobody assumes that anyone else is actually writing the content they're sending out -- what's the point?! Who benefits?
The team that implemented the feature and hit their quarterly goal and got their full quarterly bonus. The execs who set the quarterly goals will be rewarded by wall street for shipping AI features. In 2 years, when reality has struck and the market has moved on from the AI fad, another team will receive their bonus for lowering COGS by removing this feature.
Users' needs are irrelevant to this cycle.
Big tech companies need to be broken up into teeny, tiny pieces.
The vendor's share price. This has nothing to do with the _users_; it is about cramming 'AI' into as many press releases as possible because it gives the markets the warm fuzzies. No-one wants to use this, and that's really beside the point; for the time being it makes the stock price go up.
Presumably in a year or so there'll be a new fad, and most LLM-based 'features' will go the way of Clippy.
Phishers.
If you're already an effective writer, obviously AI will usually only make things worse.
But for people where English is not their first language (or whatever language you're writing in), polish is a godsend. It fixes your grammatical errors, vocabulary errors, spelling errors, punctuation errors, and all the rest.
Obviously it can go overboard in changing your meaning (as any flesh-and-blood editor also might), and presumably over time these tools will get more refined.
But most people who write intelligent comments here on HN are not the target audience for this feature. Well, until you have to dust off your high-school French or Spanish or whatever for an e-mail exchange.
Its owners, however, definitely want you to fall into lockstep that cooking the Earth for their billion$ is perfectly fine and that waking up to a life of low-impact, serviceful compassion to the Earth and its peoples is just virtue-signalling when, in fact, it is simply virtuous in the face of demented psychosociopaths.
Note that this all started with all that digital coin-mining. Just more burning the Earth for fakeass money so callous bastards can get richer.
Always strikes me as interesting that "Nvidia" sounds like the syllables in the middle of "invidious"
And what happens when they run out? Back to carbon fossil fuels?
No. No, it's not. It doesn't matter how many fools do something, they're still fools. The root word of ignorance is "to ignore".
I put the following into chatGPT: "I am a developer for an email client. I want to incorporate some AI features to help users write more polished emails. I'm thinking I will have an option that appears under the email body where a user can click "Polish". Can you see any issues with this UX?"
ChatGPT: There are a few potential issues or points of confusion with a button labeled “Polish”:
Ambiguity: Some users might confuse “Polish” (as in “to make shiny or more refined”) with “Polish” (the language). If your product is used internationally or by non-native English speakers, that label can be unintentionally confusing. Even among English speakers, “Polish” may not be immediately clear as a writing enhancement feature—people might wonder, “What exactly does it do?”

Lack of Context: If users don’t already know what the feature does, they won’t know what to expect when they click “Polish.” You may need a tooltip, hover text, or short description like “Improve your text with AI suggestions” so users understand it’s about refinement, grammar, tone, etc.

User Control: How much will it change their text? Will it rewrite in a new style or just fix grammar? If it makes heavier changes, users might worry about losing their personal style or voice. Be clear about the scope of the changes.

Trust & Consent: Users may have privacy concerns if their email content is being sent to an AI for processing. Even if that’s already in your privacy policy, the feature’s label (just “Polish”) doesn’t convey what happens behind the scenes. You might consider adding a “What will happen to my text?” link or a discreet explanation about how data is handled.

Alternative Wording: Using a label like “Improve,” “Refine,” “Enhance,” or “Polish Text” might provide clearer expectations. You might also consider dynamic or contextual labeling like “Improve Grammar & Clarity.”

How to address these concerns:

- Add clarity in the UI: A short descriptive text or tooltip under the “Polish” button can briefly explain what the feature does: “AI-powered text improvements.”

- Offer user choice: Present a preview of the changes before they commit. Let them accept or reject individual suggestions so they feel in control.

- Signal the feature’s AI: Some users will want to know that an AI is involved, especially if they’re concerned about style changes or privacy. Labeling it as “Polish with AI” or something similar can help set expectations.

Overall, the idea of a quick “one-click to refine” feature is appealing, but make sure the button’s purpose and functionality are crystal clear—both to avoid confusion and to build user trust.