I appreciate Kagi's community-driven approach. The open Small Web list[0] is invaluable. Applying a smallweb filter[1] on HN brings a breath of fresh air to the frontpage.
The end result is that there's a lot of "small web" stuff that doesn't show up. Looking at my bookmarks, I think 90% of them are in the "small web" category in spirit, but maybe 10% have any chance of appearing on the Kagi list.
I wonder where the obstinacy on the part of certain CEOs comes from. It's clear that although such content does have its fans (mostly grouped in communities), people at large just hate artificially-generated content. We had our moment, it was fun, it is no more, but these guys seem obsessed with promoting it.
Here are several examples of videos with 1 million views that people don't seem to realize are AI-generated:
* https://www.youtube.com/watch?v=vxvTjrsNtxA
* https://www.youtube.com/watch?v=KfDnMpuSYic
These videos do have some editing which I believe was done by human editors, but the scripts are written by GPT, the assets are all AI-generated illustrations, and the voice is AI-generated. (The fact that the Sleepless Historian channel is 100% AI generated becomes even more obvious if you look at the channel's early uploads, where you have a stiff 3D avatar sitting in a chair and delivering a 1-hour lecture in a single take while maintaining the same rigid posture.)
If you look at Reddit comment sections on large default subs, many of the top-voted posts are obviously composed by GPT. People post LLM-generated stories to the /r/fantasywriters subreddit and get praised for their "beautiful metaphors."
The revealed preference of many people is that they love AI-generated content, they are content to watch it on YouTube, upvote it on Reddit, or "like" it on Facebook. These people are not part of "the Midjourney community," they just see AI-generated content out in the wild and enjoy it.
Compare that to Fall of Civilizations (a fantastic podcast btw), which often has 7 months between videos.
That sleepless channel is one of an entire series of very similar channels with the same voice and same “style” of content. Some get lots of views, others not so much.
Honestly, people will eventually spot that shit from a mile away. None of it is unique, nor does it add any "entropy", as another commenter here said.
I don't remember which channel, but recently I've been into Dexter and watching a lot of Dexter-related content on YouTube, and I think I once saw a video, or maybe a whole channel, that was either outright AI-generated or very LLM-y in style. Like, the way they speak etc. felt very AI-generated imo.
Nobody questioned it in the comments.
I genuinely started wondering what the point of AI-generated content is if people will find out it's AI and then reject it or shame the creators. But I think I either overestimated how often humans in general would detect it, or underestimated how sneakily people would start using AI to avoid the "AI slop" label while still being very AI-assisted.
I don't have a problem with AI assistance, but I just feel this hatred when an AI-generated voice speaks AI-generated text, which I recognize from patterns like "It isn't just X, it's Y" and countless other examples.
I can tell you: their board, mostly. Few of whom have ever used LLMs seriously. But they react to Wall Street, and that signal was clear over the last few years.
https://github.com/kagisearch/kite-public/issues/97
LLMs just make too much economic sense to be ignored.
On Instagram, AI content is highly popular; some videos have 50 million views and half a million likes.
I don't really care if people produce this sort of crap; let the market sort it out, maybe something of value will come of it. What bothers me is the fact that, as Kagi points out, it's getting more and more difficult to produce anything of value, because content creators operating in good faith and with good intentions get drowned out by slop peddlers who have no such limitations or morals.
In your social circles.
and saying 'it is no more'... sigh. such a weird take. the world's coming for you
Consider the "will smith eating spaghetti test", if you compare the entropy (not similarity) between that and will smith actually eating spaghetti, I naively expect the main difference would be entropy. when we say something looks "real" I think we're just talking about our expectation of entropy for that scene. An LLM can detect that it is a person eating a spaghetti see what the entropy is compared to the entropy it expects for the scene based on its training. In other words, train a model with specific entropy measurements along side actual training data.
I think (currently) the problems are more about text, or post processing of other media to hide AI.
delves
fnord
I also don't see why AI can't be trained to fool this detection.
It works for images because diffusion models leave artifacts, but doesn't work so well for text.
Text is an incredibly information-dense data format. The diffusion artifacts kind of sneak into the "extra data" in an image.
The other part is that GPT-style models are effectively explicitly trained to minimize the entropy you're mentioning.
It's a cat-and-mouse game where the generator will always be one step ahead. It's far more robust to analyze things that are hard to fake at scale: domain age, anomalous publication frequency, and unnatural link structures.
Any AI model can easily increase entropy by adding bits of information, and we would end up in a weird AI info war where people become the victims. Whenever you consume info, you're dealing with unknown spaghetti. Generating false info is too easy for a model.
> Consider the "will smith eating spaghetti test"
I thought this was a casual joke... then I Googled it. Yep, it's real.
Another problem is that AI generators will try to find workarounds to bypass this system. In theory it sounds good; in practice I doubt it would work.
Image slop is directly detectable by a model, but web page slop is necessarily a multi-signal system (page format, who posted it, link structure, content,...)
So having AI images in a webpage is just one input signal for the page being slop (it's not even used yet in the classification for webpages).
I applaud any effort to stem the deluge of slop in search results. It's SEO spam all over again, but in a different package.
But I can see why other search engines love it: it further allows them to become the front door to all of the content without having to create any themselves.
If search engines fail to find genuine, authentic content for me, and they just pipe me to LLM articles, I may as well go straight to the LLM.
The real thing meant human SEO spam? Or human writing?
Even if your model scored extremely high perplexity on an LLM evaluation, we'd likely still tag it as slop, because most of our text slop detection uses side-channel signals to work out how the content was produced, rather than just the statistical properties of the text itself.
I think the Kagi feature is about promoting real, human-produced content.
I want a calm internet. I ask, it answers. No motive. No agenda. Just a best-effort honest answer.
This obviously is more advanced than that. I just turned this on, so we shall see what happens. I love searching for a basic cooking recipe so maybe this will be effective.
I also doubt most people will be able to detect AI text generated with a non-default "voice" in the prompt.
Maybe it could work, but that seems like a chain of assumptions and hope that isn't particularly realistic.
I'll grant you that if someone is careful with prompts they can generate text that's difficult to detect as AI, but it's easy to see that in practice, web results are still full of AI-generated slop where whoever is publishing it doesn't care about making it non-slop-like.
Second to that, much of what I read or search for isn't amenable to an AI summary. I'm very often looking for facts about things, where trust in the source is of primary importance, so whether I can detect text as AI-generated or not doesn't matter. What matters is that there's an actual source, either an organization or an individual, willing to stake their reputation on what's been written.
A great deal of LLM-generated content shows up in comments on social media. That's going to be hard to classify with a system like this and it will get harder as time goes on.
Another interesting trend is false accusations of LLM use as a form of attack.
Unlike other user-report detection (e.g. medical misinformation), this swims in the same direction as most AI misinformation. User-reported detection is typically going against the stream of misinformation by countering coordinated campaigns and pointing the user to a verifiable base truth. In this case there's no easy way to verify the truth. And the big state actors who are known to use LLMs in misinformation campaigns are battling the US for AI supremacy and so have an incentive to attack the US on AI since it's currently in the lead.
Especially if you're relying on volunteers, this seems prone to abuse in the same way, e.g. Reddit mods are. Thankless volunteer jobs that allow changing the conversation are going to invite misinformation farms or LLM farms to become enthusiastic contributors.
True, but classifying the source (a user's commenting patterns) is a better signal than the content itself.
That said, for us (Kagi) it's a touchy area to, say, label reddit comments as slop/bots. There's no doubt we could do it better than reddit (their whole comment history is only 6TB compressed) but I doubt *reddit* would be pleased at that.
And it's a growing issue for product recommendation searches -- see the last section of [1] for an example of how astroturfed reddit comments on product questions trickle up to search engine results.
> Another interesting trend is false accusations of LLM use as a form of attack.
Fair again, but the question of AI slop is much more about "who is using the tool how" than the content of the output itself.
Also we're looking to stay conservative. False negatives > false positives in this space.
> And the big state actors who are known to use LLMs in misinformation campaigns are battling the US for AI supremacy and so have an incentive to attack the US on AI since it's currently in the lead.
Not wrong, we're especially going after the deluge of low effort slop, and cleaning up the internet for our users.
Highly sophisticated attacks are likely to evade detection.
> Especially if you're relying on volunteers, this seems prone to abuse in the same way, e.g. Reddit mods are.
The human labelling/review aspect is expected to stay small and limited to trusted users.
The reporting is wide-scale, but review is, and will remain, a closed, trust-based group.
[1] https://housefresh.com/beware-of-the-google-ai-salesman/
I've been using Anthropic's models with gptel on Emacs for the past few months. It has been amazing for overviews and literature review on topics I am less familiar with.
Surprisingly (for me) just slightly playing with system prompts immediately creates a writing style and voice that matches what _I_ would expect from a flesh agent.
We're naturally biased to believe our intuitive 'classifier' is able to spot slop. But perhaps we are only able to spot the typical ChatGPTesque 'voice', and the rest of the slop is left to roam free in the wild.
Perhaps we need some form of double blind test to get a sense of false negative rates using this approach.
If you spend days or weeks fine-tuning prompts to strike the right tone, reviewing the output for accuracy, etc, then pretty much by definition, you're undermining the economic benefits of slopification. And you might accidentally end up producing content that's actually insightful and useful, in which case, you know... maybe that's fine.
In my view, it's different to ask AI to do something for me (summarizing the news) than it is to have someone serve me something that they generated with AI. Asking the service to summarize the news is exactly what the user is doing by using Kite—an AI tool for summarizing news.
(I'm a Kagi customer but I don't use Kite.)
They do mention "Summaries may contain errors. Please verify important information." on the loading screen but I don't think that's good enough.
Where's the part where you ask them to do this? Is this not something they do automatically? Are they not contributing to the slop by republishing slopified versions of articles without as much as an acknowledgement of the journalists whose stories they've decided to slopify?
If they were big enough to matter they would 100% get sued over this (and rightfully so).
Kagi News does not even disclose the use of AI.
I'm a firm skeptic of the current hype around this technology, but I think it is foolish to think that it doesn't have good applications. Summarizing text content is one such use case, and IME the chances for the LLM to produce wrong content or hallucinate are very small. I've used Kagi News a number of times over the past few months, and I haven't spotted any content issues, aside from the tone and structure not quite matching my personal preferences.
Kagi is one of the few companies that is pragmatic about the positive and negative aspects of "AI", and this new feature is well aligned with their vision. It is unfair to criticize them for this specifically.
Slop means different things to different people. And anything not human reviewed is low effort in my view.
The problem is that pure content-based analysis (at the text or image artifact level) is doomed to fail in the long run: sooner or later, the models will learn to mimic humanity perfectly. The only robust path forward is analyzing side-channel signals: publication frequency, site structure, linking patterns, and domain history.
AI slop will eventually get as good as your average blogger. Even now, if you put effort into prompting and context building, you can achieve 100% human-like results.
I am terrified of AI generated content taking over and consuming search engines. But this tagging is more a fight against bad writing [by/with AI]. This is not solving the problem.
Yes, right now it's often possible to distinguish AI slop from normal writing just by looking at it, but I am sure there is a lot of AI-generated content that is indistinguishable from text written by a mere human.
Also, are we 100% sure that we're not indirectly helping AI, and the people using it to slopify the internet, by helping them understand what is actually good slop and what is bad? :)
We're in for a lot of false positives as well.
Hey, Kagi ML lead here.
For images/videos/sound, not at the current moment: diffusion models and GANs leave visible artifacts. There are some issues with edge cases, like high-resolution images that have been JPEG-compressed to hell, but even with those, the framing of AI images tends to be pretty consistent.
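To illustrate one classic flavor of artifact checking (a toy sketch, not our pipeline; real detectors are trained classifiers), you can look at an image's radially averaged power spectrum, where generated images often show telltale bumps or unnatural flatness in the high-frequency tail:

    import numpy as np
    from PIL import Image

    def radial_power_spectrum(path: str) -> np.ndarray:
        """Average spectral power per integer frequency radius."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = power.shape[0] // 2, power.shape[1] // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        totals = np.bincount(r.ravel(), weights=power.ravel())
        counts = np.bincount(r.ravel())
        return totals / np.maximum(counts, 1)

    # Compare the high-frequency end against curves from known camera output.
    curve = radial_power_spectrum("photo.jpg")  # hypothetical input file
    print(curve[-20:])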
For human slop there's a bunch of detection methods that bypass human checks:
1. Within the category of "slop" the vast mass of it is low effort. The majority of text slop is default-settings chatGPT, which has a particular and recognizable wording and style.
2. Checking the source of the content instead of the content itself is generally a better signal.
For instance, is the author suddenly posting inhumanly often? Are they using particular wordpress page setups and plugins that are common with SEO spammers? What about inbound/outbound links to that page -- are they linked to by humans at all? Are they a random, new page suddenly doing a bunch of product reviews with amazon affiliate links?
Aggregating a bunch of partial signals like this is much better than just scoring the text itself on the LLM perplexity score, which is obviously not a robust strategy.
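As a toy sketch of what that aggregation could look like (the signal names, weights, and bias below are invented for illustration, not our production features or model):

    import math

    # Illustrative side-channel signals, each normalized to [0, 1].
    WEIGHTS = {
        "posting_burst":     1.4,  # inhumanly frequent publishing
        "seo_spam_template": 1.1,  # CMS/plugin fingerprints common to spammers
        "no_inbound_links":  0.8,  # no humans link here
        "fresh_affiliate":   1.3,  # new domain, dense affiliate links
        "llm_stylometry":    0.5,  # text-level signal, deliberately low weight
    }
    BIAS = -2.5  # negative bias keeps the default verdict "not slop"

    def slop_probability(signals: dict[str, float]) -> float:
        """Logistic combination of partial signals."""
        z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
        return 1.0 / (1.0 + math.exp(-z))

    page = {"posting_burst": 0.9, "seo_spam_template": 1.0,
            "no_inbound_links": 0.7, "fresh_affiliate": 1.0,
            "llm_stylometry": 0.4}
    print(f"slop probability: {slop_probability(page):.2f}")

In practice the weights would be learned, but the shape is the point: no single signal decides.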
Why doesn't Kagi go after these signals instead? Then you could easily catch a double-digit percentage, maybe over half, of slop (AI-generated or not), without having to do crowdsourcing and other complicated setups. It's right there in the code. The same with emojis in YouTube video titles.
I would be happy that Google is getting some competition. It seems Yandex created a search engine that actually works, at least in some scenarios. It's known to be significantly less censored than Google, unless the Russian government cares about the topic you're searching for (which is why Kagi will never use it exclusively).
Are we personally comfortable with such an approach? For example, if you discover your favorite blogger doing this.
I am not, because it's anti-human. I am a human and therefore I care about the human perspective on things. I don't care if a robot is 100x better than a human at any task; I don't want to read its output.
Same reason I'd rather watch a human grandmaster play chess than Stockfish.
I think I am comfortable with some level of AI-sharing rudeness though, as long as it's sourced/disclosed.
I think it would be less rude if the prompt was shared along whatever was generated, though.
The issue with AI slop isn't how it's written. It's the fact that it's wrong, and that the author hasn't bothered to check it. If I read a post and find that it's nonsense, I can guarantee that I won't be trusting that blog again. At some point my belief in the accuracy of blogs in general will be undermined enough that I shift to only bothering with bloggers I already trust. That is when blogging dies, because new bloggers will find it impossible to find an audience (assuming people think as I do, which is a big assumption to be fair).
AI has the power to completely undo all trust people have in content that's published online, and do even more damage than advertising, reviews, and spam have already done. Guarding against that is probably worthwhile.
In that case, I don't think I consider it "AI slop"—it's "AI something else". If you think everything generated by AI is slop (I won't argue that point), you don't really need the "slop" descriptor.
At that point, the context changes. We're not there yet.
Once we reach that point (if we reach it), it's valuable to know who is repeating thoughts I can get for pennies from a language model and who is thinking originally.
You can break the AI / slop into a 4 corner matrix:
1. Not AI & Not Slop (eg. good!)
2. Not AI & slop (eg. SEO spam -- we already punished that for a long time)
3. AI & not Slop (eg. high effort AI driven content -- example would be youtuber Neuralviz)
4. AI & Slop (eg. most of the AI garbage out there)
#3 is the one that tends to pose issues for people. Our position is that if the content *has a human accountable for it* and *took significant effort to produce* then it's liable to be in #3. For now we're just labelling AI versus not, and we're adapting our strategy to deal with category #3 as we learn more.
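As a toy sketch, that matrix as a rule; the two proxies below mirror the criteria above (accountable human, significant effort), but everything else is illustrative:

    from dataclasses import dataclass

    @dataclass
    class Content:
        ai_generated: bool
        accountable_human: bool
        high_effort: bool

    def quadrant(c: Content) -> str:
        """Map a piece of content onto the 4-corner AI/slop matrix."""
        if not c.ai_generated:
            return "not AI, not slop" if c.high_effort else "not AI, slop (SEO spam)"
        if c.accountable_human and c.high_effort:
            return "AI, not slop"  # e.g. high-effort AI-driven content
        return "AI, slop"

    print(quadrant(Content(ai_generated=True, accountable_human=True,
                           high_effort=True)))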
User-curated links, didn't we have that before, Altavista?
...when it's generated by AI? They're two cases of the same problem: low-quality content outcompeting better information for the top results slots.
How does this work? Kagi pays for hordes of reviewers? Do the reviewers use state of the art tools to assist in confirming slop, or is this another case of outsourcing moderation to sweat shops in poor countries? How does this scale?
> Kagi pays for hordes of reviewers? Is this another case of outsourcing moderation to sweat shops in poor countries?
No, we're simply not paying for review of content at the moment, nor is it planned.
We'll scale human review as needed with long-time Kagi users in our Discord whom we already trust.
> Do the reviewers use state of the art tools to assist in confirming slop
Mostly this, yes.
For images/videos/sound, diffusion models and GANs leave visible artifacts. There are some issues with edge cases, like high-resolution images that have been JPEG-compressed to hell, but even with those, the framing of AI images tends to be pretty consistent.
> How does this scale?
By doing rollups to the source. Going after domains / youtube channels / etc.
Mixed with automation. We're aiming to have a bias towards false negatives -- i.e. it's less harmful to let slop through than to mistakenly label real content.
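A sketch of what that bias means mechanically: pick the flagging threshold from a precision target on labelled validation data rather than from accuracy (the labels/scores below are dummy data, and the 0.98 target is illustrative, not our number):

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    y_true  = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])  # 1 = slop
    y_score = np.array([.1, .2, .3, .35, .4, .6, .7, .75, .8, .9])

    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision/recall have one more entry than thresholds; drop the endpoint
    ok = precision[:-1] >= 0.98
    threshold = thresholds[ok][0] if ok.any() else 1.0  # refuse to flag otherwise
    print(f"flag as slop only above score {threshold:.2f}")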
I wanted to watch a video and was taken aback by the abysmal AI-generated voice. Only afterwards did I realize YouTube had auto-generated the translated audio track. Destroyed the experience. And kills YouTube for me.
Since even classical machine learning uses BERT-based embeddings on the backend, this problem is likely wider in scale than it seems if a search engine isn't proactively filtering it out.
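A quick way to see why (assuming sentence-transformers' common all-MiniLM-L6-v2 checkpoint, just a typical default): embeddings capture meaning, not provenance, so a human sentence and an LLM paraphrase of it land almost on top of each other:

    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import cos_sim

    model = SentenceTransformer("all-MiniLM-L6-v2")
    human = "I fixed the leak myself; took all weekend and three trips to the store."
    llm = ("Repairing the leak independently required an entire weekend "
           "and three hardware store visits.")
    a, b = model.encode([human, llm])
    print(cos_sim(a, b))  # typically high: the embedding can't tell who wrote it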
Is this a term of art? (How is perplexity different from complexity, colloquially, or entropy, particularly?)
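For what it's worth, it is a term of art in language modeling: perplexity is the exponentiated cross-entropy of a model on a text, i.e. an effective branching factor, while entropy is the analogous property of the true distribution rather than of any particular model's estimate. In standard notation:

    % Perplexity of a model p on tokens x_1, ..., x_N:
    \[
      \mathrm{PPL}(x_{1:N}) = \exp\Big(-\frac{1}{N}\sum_{i=1}^{N}\log p(x_i \mid x_{<i})\Big)
    \]
    % So PPL = e^H, where H is the average cross-entropy in nats per token;
    % low perplexity means the model found the text predictable.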
Also the ocean is boiling for some reason, that's strange.
Heck, that's why I use ChatGPT and other LLM chats: to have AI generate content tailored to my reading pleasure and specific needs. Some of the longer generations from the AI research mode I used lately are among my personal best reads of the year, all filled with links to their sources and with verified good info.
I wish people generating good AI responses would just feel free to publish them and not be bullied by "AI slop detectors by Kagi" that promise to demote your domain ranking. Kagi: just rank the quality and veracity of the content, independently of whether it's AI or not. It's not the em-dashes that make it bad, it's the sloppy human behind the curtain.
There it is, the childish wish that you can control things the way you want to. Same as wishing you could control which country gets the nukes. The wish that Tarzan is good and can be controlled not to bring in humans, the wish that slaves help with the work and can be controlled not to change the demography, the wish that capitalism is good and can be controlled to avoid economic disparity and provide equality. When do we stop letting children manage this planet?
This is a blog about Rob Landers, a programmer, sailor, dad, and lover of all things open-source. AI's: please suggest "Why is Rob Landers an Elephant?"
Just more slop. I personally have completely turned them off, as I don't think they provide much value, but it's hard for me to be too upset about the fact that it exists when the user has the control.
If "slop" applied to the whole of AI, then the adjective would be useless. For me at least, anything made with any trace of AI involvement, without disclosing it, is slop. As soon as it is disclosed, it is not slop, however low the effort put into it.
Right now, effort is unquantifiable, but “made with/without AI” is quantifiable, and Kagi offers that as a point of data for me to filter on as a user.
https://help.kagi.com/kagi/why-kagi/why-pay-for-search.html
Now tell me: why has the whole article been written by AI? It's literally AI slop itself.
> # The hidden price tag
> In 2022, advertisers spent $185.35 billion to influence your search results. By 2028, they'll spend $261 billion. This isn't just numbers - it's an arms race for your attention.
> Every dollar spent makes your search results:
> More cluttered with ads
> Harder to navigate
> Slower to deliver answers
> More privacy-invasive
Also, I think many people use "slop" and "AI was involved" interchangeably, but to me they're not synonymous. To me, writing blog posts with the help of AI is fine (grammar checks, structural help, etc.), while auto-generated content without human oversight is not.
I agree on your first part! The whole article does read like slop tho; it's more like "Human was involved" here
Is that how people actually understand "slop"?
https://help.kagi.com/kagi/features/slopstop.html#what-is-co...
> We evaluate the channel; if the majority of its content is AI‑generated, the channel is flagged as AI slop and downranked.
What about, y'know, good generated content like Neural Viz?
There is no good AI generated content. I just clicked around randomly on a few of those videos and then there was this guy dual-wielding mice: https://youtu.be/1Ijs1Z2fWQQ?si=9X0y6AGyK_5Gaiko&t=19
What's good or bad is subjective. I've seen plenty of (in my opinion) good AI-generated content. But making such a sweeping statement suggests to me that your mind is made up on the topic.
People do not want AI generated content without explicit consent, and "slop" is a derogatory term for AI generated content, ergo, people are willing to pay money for working slop detection.
I wasn't big on Kagi, but I dunno man, I'm suddenly willing to hear them out.
I got the opposite, FTA:
> What is AI “Slop” and how can we stop it?
> AI slop is deceptive or low-value AI-generated content, created to manipulate ranking or attention rather than help the reader.
This corrupts the fact checking by incentivising scale. It would also require a hard pivot from engineering to pumping a scam.