Doesn’t match any of our internal product design, and adds tons of extraneous features. When I brought this up with said PM, they basically responded that these inaccuracies should just be raised in the sprint review and through “partnering” with the engineering team. AI etiquette is something we’ll all have to learn in the coming years.
Cue a similar joke about salary negotiation, and the annual dance around goals and performance indicators. Is it really programmers who should be afraid to become redundant, when you think about it?
I should know better than to make jokes about reality. It has already one-upped me too many times.
The second problem was always going to be there, even with human written tickets, but the problem really is that someone who relies on AI gets into the habit of treating the LLM as a more trustworthy colleague than anybody on the team, and mistakes start slipping in.
This is equally problematic for the engineers using AI to implement the features because they are no longer learning the quirks of the codebase and they are very quickly putting a hard ceiling on their career growth by virtue of not working with the team, not communicating that well, and not learning.
Apparently, asking "why it doesn't make any sense" wasn't polite.
If I remember correctly, she came up with ~200 questions for a 2-page ticket. I helped write some of them, because for parts of the word salad you had to come up with the meaning first and then question the meaning.
You know what happened after she presented it? The ticket got rewritten as a job requirement, and now they're seeking some poor sod to make it make sense lol
One had to be very unqualified to even get through the interview for that job without asking questions about the job, I feel. Truly, an AI-generated job for anyone who is new to the field
Talk about an AI induced productivity increase ...
Anyway.
People are starting to log support tickets using Copilot. It's easily recognisable, and they just fire a Copilot-generated email into the Helldesk, which then means I have to pick through six paragraphs of scroll to find basic things like what's actually wrong and where. Apparently this is a great improvement for everyone over tickets that just say "John MacDonald's phone is crackling, extension number 2345" because that's somehow not informative enough for me to conf up a new one and throw it at the van driver to take to site next time he's passing, and then bring the broken one back for me to repair or scrap.
Progress, eh?
I also do the extra step of eliminating things that are not needed, or we review this during backlog refinement.
Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules of how to use AI.
Funny how that works out.
The ticket was given to an LLM, the code written. Luckily the engineer working on it noticed the discrepancy at some point and was able to call it out.
Scrutinizing specs is always needed, no matter what.
We’re getting prospective and existing clients emailing us what look like AI generated spreadsheets with features that are miles long that they want us to respond to. Like thousands of lines. And a lot of features that are “what does that even mean??”
We get on a call with them and they don’t even know what is on the spreadsheet or what it means…
Very much a “So you want us to make Facebook?” (Not actually asking for Facebook) feeling.
I fear these horror shows of spreadsheets are just AI fever dreams….
AI makes it worse. This is where people will lose tons of productivity with AI and many people are completely clueless. It'll hit them like a ton of bricks one day.
I had a PM that was unable to work without AI. Everything he did had to include AI somehow.
His magnum opus was 30 extremely large tickets that had the exact same text minus two or three places with slight variations. He wanted us to create 30 website pages with the content.
The ticket went into details such as using a CDN, following the current design, writing a scalable backend, test coverage, about 3-4 pages per ticket, plus VERY DETAILED instructions for the QA. Yep: all in the same task.
In the end it was just about adding each of the 30 items to an array.
I don’t know if he knows, but in the end it was this specific AI slop that got him fired.
Everyone wants to hand off the real work to someone else.
Has been for a long time unfortunately. AI didn't create this behaviour but certainly made it easier for the other side to do it.
> Review the slop with the person that submitted it.
Alternatively, mark them as "Needs Work" if you can. But yes, put the ball in their court by peppering them with questions. Maybe they will get the hint.
I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.
And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
Isn't it obvious? If I'd wanted to see AI response to my question, I'd ask it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast-food sometimes, but if I was served a Big Mac at a sit down restaurant I'd be properly upset.
I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"
Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're being worthy of someone's else's time and attention.
And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).
In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me.
Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.
Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.
Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.
The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask it myself. Sending AI output has, as the author writes, the same connotation as sending a LMGTFY link. It does not provide me any value at all; I know how to write a question to an AI, just as I know how to use Google.
Did you even read the article? It is about person-to-person interactions. The three examples were:
* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AIslop)
* Someone being asked for their expertise and responding (but it's generic and misfitting AIslop)
* Someone comes with a problem thesis looking for help (but it's generic and misfitting AIslop)
The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.
The first one would have been impossible before, because the person would have had to write the unhelpful response themselves, and they wouldn't have found the words at that length. You could ignore them or pick it apart easily. The last one would be impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.
What kind of workplace hellscape do you work in where people posting low-effort bait on SLACK was the norm? The premise of this reply is entirely nonsensical.
Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting).
And their concern is not the mere quality or lack thereof, but also its origin, and this is something new.
>I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
No, many of us hate "our AI" content too, and wouldn't impose it on other people, the same way we wouldn't fling shit at them.
Well, cat videos make people happy.
> The internet was not a bastion of high quality content or discourse pre-AI.
I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta.
This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location.
To "opine" is to give an opinion on something.
To "pine" for something is to wish for it, usually in a nostalgic sense.
I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify.
If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that.
Right now it's as if everyone started wearing digital face masks that replaced their facial expressions with "better" ones. Sure, maybe everyone's faces weren't perfect before, but their expressions contained useful information.
The problem is the same as it has always been. Figure out how to use your time and attention effectively,
Somehow nobody that replied to you mentioned this. The issue is reciprocity. If I spend two hours manually researching and using my expertise to reply to a ticket, then 10 minutes later I get a novel-length AI reply in response...I now have no respect for the person replying with AI, because they can't even be bothered to spend a few minutes and summarize their "findings" and I suspect they didn't even read what their AI wrote. Especially in a professional setting, where you were hired for your (supposed) skillset, not your prompting skills.
If I'm sending out AI content, then sure, give me AI content in return.
I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.
(the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if its artisanal human-written text)
(n)amow(?): (not) All my own work ?
I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it.
Because they are. It would be like if I bought some trinket off aliexpress and told you I made it by hand just for you. You wouldn't mind if you bought it yourself, but the fact that I lied about it to make it seem like I care is deceptive and immoral.
Sending someone AI generated text without disclosing so is incredibly offensive. It says you don't care about wasting the receivers time and don't care about honesty either.
I'm curious what will happen once AI generated text gets good enough that people can't tell the difference. Will we just assume everything is AI and remain suspicious, or will we stop caring?
My hunch is we'll all retreat to places of the internet where we can feel sure we're talking to real people and there are chains of trust. For example, I spend most of my time on discord servers where people are real life friends or friends of friends, and increasingly assume "public" internet to be AI default, and therefore use our own AI to browse and summarize for us.
I wasn't expecting it to be so controversial. Reading and responding to many of the replies, I think many people are strawmanning me as being in support of AI slop.
We are getting to this weird situation where instead of Alice sending a message to Bob, Alice sends the message to her AI, which sends it to Bob's AI, which then tries to recover Alice's original message.
To be fair, I don't think it is an AI problem; it's more a quirk of formal communication. The same happens with human secretaries. For example, I want my customer to pay me, and I want to be professional but not bother with the details, so I ask my secretary to write a well-written letter to my customer, with a proper bill and all that stuff. My customer's secretary will then read the letter and tell his boss, "hey, our supplier wants $xxx". I could have just called the boss directly and said "hey, it is $xxx", but that is rarely how it is done. Here, it is AI that is taking charge of the formalism, and I find it to work really well for this, as it is essentially a translation task, which is what LLMs do best.
I am not discounting human secretaries here, they can do much more than write formal letters, but that's a part of their job where LLMs excel at.
Obviously you're not a golfer. Human secretaries don't have non-deterministic hallucinations and random critical omissions in their summaries, which I've witnessed first-hand with LLMs. More importantly, if they do, you have more deterministic mitigations with them than you do with LLMs, as there are no mitigations with LLMs except praying that some new model in an unspecified future will magically produce better summaries.
The only way to stay sane when using these tools is to pretend that these things won't ever happen and just go about your business like the rest of the zombie workforce, because no one wants to stop the train and address the issue.
There is a reason why the subtitle of Dr. Strangelove is "How I Learned to Stop Worrying and Love the Bomb".
I just wanted to point out the seemingly ridiculous idea of formal communication that none of the interested parties actually read. It is as if two English speakers insisted for the discussion to be in Spanish, and they both brought an interpreter for that.
Somebody's going to revolutionize how we use AI by creating an algorithm where the output of an AI is modified so that it has fewer tokens but results in a similar-enough input for another AI to respond to. Like dropping purely syntactic words or using small synonyms. It'll be a bizarre shorthand that makes no sense to humans, riddled with artifacts of the model.
Or we're going to realize that creating output with one AI for another to ingest is needlessly breaking a session into different pieces. But that would mean a lot of professional emailers getting optimized out.
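A minimal sketch of that hypothetical shorthand idea: drop purely syntactic words before handing the text to another model, on the assumption that it can still recover the intent. The stopword list and tokenizer here are purely illustrative, not from any real system.

```python
import re

# Illustrative set of "purely syntactic" words to drop (hypothetical;
# a real system would learn what the receiving model can infer).
STOPWORDS = {
    "a", "an", "the", "is", "are", "was", "were", "be", "been",
    "of", "to", "in", "on", "for", "and", "or", "that", "this",
    "it", "as", "at", "by", "with", "from", "will", "would",
}

def compress(text: str) -> str:
    """Strip stopwords, keeping content words and punctuation."""
    tokens = re.findall(r"\w+|[^\w\s]", text)
    kept = [t for t in tokens if t.lower() not in STOPWORDS]
    return " ".join(kept)

msg = "The report is attached and will be reviewed by the team on Friday."
print(compress(msg))  # "report attached reviewed team Friday ."
```

The output is exactly the kind of bizarre, artifact-riddled shorthand described above: unreadable to a human skimmer, but usually enough for another model to reconstruct the gist.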
The funny thing is that I know my manager got this “working” within a week with Claude. I had to spend 2 weeks with 4 JIRA tasks, many commits for toy examples, and three reports.
Velocity is everything.
For me it destroyed the company as an aligned group of people; at the C level, it's just a bazaar of drones throwing AI slop at each other.
I don't think it changed much from before LLMs. It's just that the slop is now automated.
This is one role where I can't tell if it's completely useless in an AI-powered world, or if that's basically what we'll all end up doing: reviewing and commenting on the work instead of actually making it.
However, the essay and the guidelines were all human-written!
So trace* through ninerealmlabs and ahgraber and sure enough:
I used AI:
- to help build this website.
- to help generate examples of sloppypasta based on my original guidance
- to proofread and review the human-written copy to provide a critical review
- to improve my arguments and ensure clarity.
Kudos for being forthright.

---

* Turns out clicking "Open Source" bottom right gets there faster!
Authentic human brutalism =)
I'm possibly too jaded / cynical already...
They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...
That would give the responder the chance to modify the prompt and get a perhaps better answer from the LLM?
That results in a shorter and more concise message, and the original sender can choose to use the prompt you provided on their favourite LLM from the start.
AI, and different AIs, give different answers to the same question, so it may be useful if you can provide a good summary of the different responses you got.
How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?
I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, I am plainly telling them that their work looks like bad AI slop.
The most interesting incident for me is having someone take our Discourse thread, paste it into AI to validate their feelings being hurt (it took a follow-up prompt to go full sycophancy), and then posting the response back that lambasted me. The mods handled that one before I was aware, but I then did the same thing, giving different prompts, and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.
The person being targeted just prompted the same AI with "Which user has thin skin" and instantly the AI turned on the other person. Then the moderators got involved and told the first guy to stop using AI as a genital pleaser.
Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"
Embrace the tension. Tension is human.
The other person already demonstrated a lack of professionalism by sharing unverified AI slop so, in case of conflict, I wouldn't be surprised if they continued acting unprofessionally by spreading false rumors, unnecessarily escalating the situation to higher ups, secretly sabotaging the project, etc.
That's the asymmetry of the problem: Writing with AI delegates the thinking to the reader as well as all the risk for correcting it.
I've found success having sidebar conversations with the colleague (e.g., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior. It may also be useful to see if you can propose or contribute to a broader policy on appropriate AI use/contribution with AI, and leverage that policy as the conversation justification?
Pattern rather than person? General team reviews or the like. As long as it's not tech leadership pressing for it..
You don’t. You keep these arguments handy for ignoring their output until it’s germane.
https://github.com/ocaml/ocaml/pull/14369#issuecomment-35573...
sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
The author wastes time talking about this case, and even does it first before talking about the much worse case:
>"The sender shares AI output as their own work, with no indication a chatbot wrote it."
This is 100 times worse, and is objective rather than subjective. If the sender admits it's AI when confronted, it kills their reputation (and if they don't admit it and it turns out it is AI, it's fraud, a fireable offense).
Putting these 2 categories of AI use together wastes breath and conflates the two; the message will not be clear at all.
What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific instance of a general rule: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use, or content you are not sure is AI-written or not. If you give a specific way to use AI, you can add features like auditability and supply-chain control, and you can remove any outs for employees and users that do not comply with the policy.
People who previously couldn't put in the effort or quality are now vomiting tons of slop I'm meant to read and review.
PR descriptions. Documentation. Plans. Etc.
Walls of sprawling text, "relevant files", linked references, unhelpful factoids, subtle inconsistencies and incoherencies.
It's oppressive like 95% humidity on a warm day.
If you tell me that an LLM wrote it, I will stop reading because I assume the only reason you'd tell me that is YOU want to hedge your bets about it being wrong, so it's clearly not worth my time if you don't even believe it.
However, if I don't know, then I will take it at face value. People can get LLMs to output sensible text and facts, so I expect the implementation detail of "used an LLM" to be hidden from me. If you can't do that, I will think of you as a low effort writer that sprinkles in sloppy rhetorical devices and lists of nothing into your massive multi-paragraph messages, but I'll assume that's YOUR taste. If something happens to be wrong in it, it's because YOU didn't know it was wrong.
It's very frustrating, they delegate the responsibility of having written wrong information or code away to the bot, as if there was nothing they could do.
They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.
They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
I'm starting to be reminded of Neal Stephenson's "Diamond Age". He described a future in which people walked around with a nearly invisible defensive army of nanobots surrounding them whose job it was to counter the offensive nanobot swarms of their enemies. Characters in this novel would go about their business while an unseen nanobot war took place in the air around them.
We're rapidly reaching the point where we will need AI to defend us from AI. i.e. We will soon need agents filtering all that we read and removing slop, just so we can preserve our time and attention for things that are human and real.
It includes 4 follow up actions and I automate check in messages to see how they are progressing with them.
Another reason is that if I’m using a custom setup with different knowledge bases and tools, I don’t want to respond “set up X, then ask AI”.
I think this should be flipped around and instead be “stop sloppy asking”
I feel like we just need the equivalent of "Let me Google That" for LLMs.
Caveat: I'm talking about cases where the question is in-depth or requires nuance beyond surface-level information. They don't know the answer themselves, so rather than saying "oh, sorry, I don't know; this document or person might help," they paste AI output.
As an example I might post in our teams chat, that I've seen an issue on our physical hardware, there is a step change in the telemetry (increased vibration or some other erratic behavior on a thermocouple). Has anyone had experience with what can cause this on this type of installation.
Then you get a Copilot paste of the prompt "what causes high vibration on rotating machinery".
If somebody clearly asks a question that could just be answered by Google or an LLM prompt quickly then fair enough, but I'm after specific product knowledge from our technical team. On self-reflection, I might be very specific in my question so other members of the team can't go down that route as easily.
It is easy to do in social media because the context is global but in enterprises it is a bit harder.
Something like "flagged as very likely untrue by AI" is something I would really appreciate.
I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.
When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and see if the information makes sense.
And it shows up the most with people who answer questions in domains they're not 100% familiar with.
What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.
Also related: the Gish gallop. "During a typical Gish gallop, the galloper confronts an opponent with a rapid series of specious arguments, half-truths, misrepresentations, and outright lies, making it impossible for the opponent to refute all of them within the format of the debate.[2] Each point raised by the Gish galloper takes considerably longer to refute than to assert. The technique wastes an opponent's time and may cast doubt on the opponent's debating ability for an audience unfamiliar with the technique, especially if no independent fact-checking is involved, or if the audience has limited knowledge of the topics.[3]" The Wikipedia page has some good counter-strategies: https://en.wikipedia.org/wiki/Gish_gallop
I am really surprised by the amount of backlash on this site against using LLM helpers in writing. There are many ways in which this can go wrong, and the article lists some of them, but it does not blindly condemn all LLM writing helpers.
What would be even more constructive would be an article listing the good ways of using LLMs.
- now the fun part: which AI did I use to write the above?
I wish there was a remedy. I block or mute the person when I can.
In such a case, I think that spending extra time to scrub the output of em-dashes or other slop is just virtue signaling that “I am man, not machine”
Irrelevant. As soon as the ctrl-c moment happened, all of the real effort became worthless and utterly overwhelmed by the slop in the face. I do not read LLM responses. I will stop reading any text as soon as it becomes clear it was LLM generated. If it is in a setting where I have the power to do something about it, it will be swiftly followed by a flag/ban/block/ignore/comment deletion…
I'm 55 years old. "slop" is way older than your examples. Try a dictionary, eg: https://dictionary.cambridge.org/dictionary/english/slop
LLMs are tools. For me (wot had a C64 as one of my first computers) they are seriously close to magic but I understand what a "next token guesser" means.
Instead, they'll use an LLM to send a slop response back.
Instant karma!
How?
Admittedly, the paragraph is somewhat confusingly written. Also probably written by an LLM.
As for "Stop Sloppypasta", it doesn't feel like the content is AI-generated to me but it feels like the presentation of it is. I don't know whether that changes my opinion of the whole thing or just the presentation. As for the advice in it, it seems good, but it also seems a little bit brittle, because people can use an LLM session to review things generated in a different LLM session before sending with some success, and this will increase and therefore it's a moving target.
this seems reasonable to me, especially in this transition period where we're navigating ethical and respectful collaboration that involves AI. give people a little grace in this weird new world.