https://bsky.app/profile/willhaycardiff.bsky.social/post/3lk...
I've also seen evidence of posts Twitter likes (violent and hateful anti-immigration posts - literally a photo of a dummy tied to a chair being shot in the back of the head) being spammed by love bots.
Twitter seems to be a propaganda channel, run by Donald/Elon/et al.
I've been saying this for ages, and I was never joking.
There are plenty of real examples: blocking certain people, stopping fact checking, weakening bot protection and detection, etc.
There is a reason why Twitter needed more people before.
>https://bsky.app/profile/willhaycardiff.bsky.social/post/3lk...
This could also very well be explained by a ranking algorithm that optimizes for "engagement": getting spammed by hate bots is "engagement." That would be perfectly consistent with what the guy is experiencing, minus the accusation that the platform is suppressing anti-Ukraine posts, which is totally unsubstantiated.
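To make that mechanism concrete, here is a toy Python sketch (all field names and weights are made up, not any platform's real code) of an engagement-maximizing ranker. Because every interaction counts positively and sentiment is invisible to the score, a flood of hateful bot replies lifts a post exactly like genuine interest would:

```python
# Toy sketch (hypothetical weights, not any platform's real code):
# an engagement-maximizing ranker has no notion of *why* people
# interact, so hostile reply spam raises a post's rank like any
# other interaction.

def engagement_score(post):
    # Every interaction type counts positively; sentiment is invisible.
    return (post["likes"]
            + 2 * post["replies"]    # replies weighted higher: made-up weight
            + 3 * post["reposts"])

def rank(posts):
    return sorted(posts, key=engagement_score, reverse=True)

organic = {"id": "a", "likes": 50, "replies": 5, "reposts": 2}
spammed = {"id": "b", "likes": 10, "replies": 200, "reposts": 1}  # reply flood

# The spammed post outranks the organic one.
print([p["id"] for p in rank([organic, spammed])])  # → ['b', 'a']
```

The point of the sketch is only that "engagement" as an objective is sign-blind: the ranker cannot distinguish 200 hateful replies from 200 enthusiastic ones.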
Given the post was suppressed, how did the hate bots know about it to spam it?
It seems to me Twitter suppressed the post until they had time to spam it with hate posts.
Bear in mind that this suppression did not happen for other posts, only for the pro-Ukraine post, so Twitter is at the very least specifically suppressing pro-Ukraine posts.
> And if you think this only happens on one social network, you’re already caught in the wrong attention loop.
> The most effective influence doesn’t announce itself. It doesn’t censor loudly, or boost aggressively. It shapes perception quietly — one algorithmic nudge at a time.
I think Twitter's influence is uniquely concentrated in its owner, and it is uniquely willing to act so blatantly; other platforms have to at least pretend not to steer things so directly, so as not to upset shareholders.
Edit: is this why 4chan was hit with the disruption - because there's no room for this delay mechanism?
What approach do you think HN takes, and how many bots do you think are here? Because I don't see any ads...
We are at the early stages of this, so we are watching the capture of influence. There is some discussion that influence is the new capital. And we are replicating the systems that allow for the accumulation of capital in this new digital age.
He basically handed the site over to the IRA and told them to go nuts.
Personally, I’ve stepped away from anything associated with X.com or Elon Musk. I deleted my accounts, disconnected from the ecosystem entirely—and life is better for it. No doomscrolling, no algorithmic nudging, no subtle behavioral conditioning. Influence may be the new capital, but opting out is still a form of agency. Disengagement can be a powerful act.
We often forget: participation isn’t mandatory.
Then Elon started taking testosterone (or whatever it was that jacked up his aggression), using psychedelics, and became incapable of keeping his mouth shut. To compound it he then got involved in politics.
Now I will never buy a Tesla, starlink, or anything else he's involved in because his behavior represents a real risk that any of those companies might cease to exist if Elon gets high and does something stupid, then I'll be stuck without support.
Similarly, a social media account is an investment. I would never invest my time into building relationships on a platform like X. Even if it does survive Musk, the community is broken permanently.
There is a lot to gain for the powerful if they can convince those they wish to hold power over that the "grapes are sour," so to speak. That leaves fewer people fighting for the few grapes available, as we stretch this analogy to its breaking point.
No man is an island, and all that. If the holders of influence decide to start a war, you are in it whether you like it or not.
https://web.archive.org/web/20150501033432/https://voat.co/
But then they instituted karma-based throttling on participation:
https://web.archive.org/web/20170520210511/https://voat.co/v...
That, plus the influx of racists and misogynists chased off of Reddit, led to a snowball effect where the bigots upvoted themselves into power-user status and censored anyone who stood against them, which discouraged normies from sticking around, which further entrenched the bigotry. Within a few years, virtually every single new post on the site was radically right-wing, blatantly racist/sexist/antisemitic neo-Nazi shit:
https://web.archive.org/web/20200610022710/https://voat.co/
The site shut down by the end of 2020 from lack of funding.
You can see basically the same thing happening on Xitter, it's just slower because the starting userbase was so much larger, and Elon (for now) can continue to bankroll it.
Even in regular posts, Reddit has been a hive mind lately. If you scroll through the comments, most of them will have the same opinion over and over, with comments that add nothing to the discussion, like "I agree," getting hundreds of upvotes.
But I agree, since the API thing, it has sucked HARD.
This didn’t start with the API change drama. The API change protests were their own crusade. The calls to ban Twitter links or AI art are just the next iterations of the same form of protest.
Many of the big subs were thoroughly astroturfed long before the API changes. The famous ones like /r/conservative weren’t even trying to hide the fact that they curated every member and post
That has been the case for over 10 years now. It's absolutely not a new phenomenon.
But I don't think the "crusades" are always bot related. Movements get momentum.
AI art does not exist. There is only slop stolen from artists.
I was heavily involved in buying/selling spam accounts for years on reddit. If you think it wasn't heavily manipulated, at least the frontpage, then lol you were buying it like everyone else.
This is just practical given you can't see tweet threads (and sometimes even tweets) without an account.
> against banning AI Art
I think you mean to say reddit is pro-banning AI art?
Anyway, banning AI art is absolutely good for curating quality posts. AI art is incredibly low-effort, easily spammable, and has legitimate morality concerns among artist communities (the kind that post high quality content). Same goes for obviously AI-written posts.
I agree content quality on the site has fallen drastically, but those are both measures to try and save it.
That's what did it for me, zero Reddit unless I can't find the information anywhere else, and even then it's for viewing a single post and then I'm gone.
There exist people who think Biden had a better shot, and that replacing him with Harris was a mistake? Did they not look at his approval ratings earlier that year, then look up what that has historically meant for presidential re-elections? Dude was gonna lose, and by the time of the replacement he was likely gonna get crushed. The replacement probably helped down-ballot races, given how badly Biden was likely to perform, so it was a good idea even though she lost.
Like, yes, it was per se bad but people blaming that for the defeat is... confusing to me.
r/redditminusmods used to track this. Every 12 hours they'd take a snapshot of the top 50 posts and then check ones from the previous 12 hour snapshot to see what percentage had been deleted. When it started, it was averaging 20% or so. By the end, it was at 50/50 or 49/50 deleted almost every single 12 hour period.
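The r/redditminusmods methodology described above is simple to sketch. Here is a minimal Python version (the fetching step is stubbed out and the post IDs are illustrative, this is not the subreddit's actual code): snapshot the top-50 post IDs, wait 12 hours, check which are still visible, and report the fraction removed:

```python
# Sketch of the r/redditminusmods approach (illustrative, not their code):
# compare a 12-hour-old snapshot of top-post IDs against what is still
# visible now, and report the deletion rate.

def deletion_rate(previous_ids, still_visible_ids):
    """Fraction of the previous snapshot that is no longer visible."""
    previous = set(previous_ids)
    removed = previous - set(still_visible_ids)
    return len(removed) / len(previous)

# Example: 50 posts snapshotted, only 25 still visible 12 hours later,
# matching the "50/50 or 49/50 deleted" figure described above.
snapshot = [f"post{i}" for i in range(50)]
later = snapshot[:25]
print(deletion_rate(snapshot, later))  # → 0.5
```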
Of course, reddit couldn't allow this level of scrutiny, so they banned that subreddit for unstated reasons, and now the only good google result for it actually leads back here. See for yourself how bad it was: https://news.ycombinator.com/item?id=36040282
That only goes to two years ago. It feels like it's gotten even worse since then. That's not even going into some subreddits (worldnews, politics, etc.) creating the illusion of consensus by banning anyone with an opinion outside of a narrow range of allowed ones.
Is this "mods run amok" or is it the bots gaming the algorithm more effectively and now account for nearly half of all new popular content?
In general my advice to anyone considering Reddit is to start with the default list of subreddits that you get when not logged in. Delete all of those from your list, and track down the small subreddits that interest you. The defaults are all owned by professional influence peddlers now, and what little actual content seeps through is not worth the effort to filter out.
I already got quite a lot of the data pipeline setup for this, so if anyone wants to collab hit me up!
The website is truly unusable unless you directly go to small niche subreddits and even then you roll the dice with unpaid mods with a power complex.
There's a really interesting pattern where you'll see one person start a thread asking "Hey, any recs for durable travel pants?" Then a second comment chimes in "No specific brands, just make sure you get ones that have qualities x, y, and z". Then a third user says "Oh my Ketl Mountain™ travel pants have those exact traits!" Taken on their own the threads look fairly legit and get a lot of engagement and upvotes from organic users (maybe after some bot upvoted to prime the pump)
Then if you dump the comments of those users and track what subreddits they've interacted on, they've had convos following the same patterns about boots in BuyItForLife, Bidets in r/Bidets, GaN USB chargers in USBCHardware, face wash in r/30PlusSkincare, headphones, etc. You can build a whole graph of shilling accounts pushing a handful of products.
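The graph-building step described above can be sketched in a few lines of Python (account names and thresholds here are hypothetical, purely to illustrate the idea): treat accounts as nodes and link a pair whenever they co-appear in threads across multiple unrelated subreddits.

```python
# Sketch of the "shill graph" idea: flag account pairs that keep showing
# up together across distinct subreddits. Names and threshold are
# hypothetical illustrations.
from collections import Counter
from itertools import combinations

def cooccurrence_edges(threads, min_shared=2):
    """threads: list of (subreddit, [authors]) tuples.
    Returns account pairs co-appearing in >= min_shared distinct subreddits."""
    shared = Counter()
    seen = set()  # (pair, subreddit), so repeats within one sub count once
    for subreddit, authors in threads:
        for pair in combinations(sorted(set(authors)), 2):
            if (pair, subreddit) not in seen:
                seen.add((pair, subreddit))
                shared[pair] += 1
    return {pair for pair, n in shared.items() if n >= min_shared}

threads = [
    ("BuyItForLife", ["asker1", "primer_acct", "shill_acct"]),
    ("Bidets", ["asker2", "primer_acct", "shill_acct"]),
    ("USBCHardware", ["asker3", "primer_acct", "shill_acct"]),
]
print(cooccurrence_edges(threads))  # → {('primer_acct', 'shill_acct')}
```

The askers change per thread, but the primer/shill pair recurs everywhere, which is exactly the pattern the comment describes.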
(and the mobile app is just atrocious, RIF was way better in usability, etc)
Important to note, I first saw this specific chart and claim of Musk's heavy handed influence via X. Also, I see plenty of dissenting opinions (in a general sense on Trump, Tariffs, Musk, DOGE, etc) on X. Alternative views definitely have reach.
Also important to note: my posts, in a domain where I am very knowledgeable and will spend an unreasonable amount of time authoring them to make various points, garner mere double-digit views. So when someone cries about no longer having millions of views for their uneducated hot takes... spare the tears.
> Alternative views definitely have reach.
Yes, but are we in a 1984 situation where that reach is managed behind the scenes? Reach, but perhaps not too much reach. With respect to the chart, how do we know that Twitter users are not largely partitioned? How representative is the fact that you saw something compared to other "communities" on X?
All the while, even if you saw a 'dissenting' chart, the fact that the chart exists is direct evidence of the power of a subtle shadow-ban effect. It's not about tears and whining; it's that a single act by 'powerful' accounts can control who gets visibility and who does not. The point is that it is not you, the community, that controls what is popular; it is the powerful accounts. That is the issue.
In the US it is not the government paying these sums, it is the billionaires who bought the media outlets. When you look for editorial bias in the US it's not pro-government, it's pro-wealth. Or more specifically pro-wealthy people.
> I would say Wikipedia is even more dangerous because it presents as basic facts.
Can you give some examples of political bias in Wikipedia articles?
https://www.nytimes.com/interactive/2025/04/23/business/elon...
or from webarchive
https://web.archive.org/web/20250423093911/https://www.nytim...
> What people see feels organic. In reality, they're engaging with what's already been filtered, ranked, and surfaced.

Naturally, I (and I think many humans do too) often perceive the comments and content I see as backscatter of organic activity reflecting some sort of consensus. Thousands of people replying the same thing surely gives the impression of consensus. Obviously, this is not necessarily the truth (and it may be far from it). It remains interesting, though: since more people may perceive it as consensus, it may become consensus after all, regardless.
Ultimately, I think it’s good to be reminded about the fact that it’s still algorithmic knobs at play here. Even if that’s something that is not news.
there were three main pieces of actual evidence i leaned on:
1. the nytimes story on account silencing, https://www.nytimes.com/interactive/2025/04/23/business/elon...
2. the visible boost of low-value tweets (though i should have connected to the x api hose and quantified the data),
3. this paper — https://github.com/timothyjgraham/AlgorithmicBiasX/blob/3f4c...
which actually didn’t make it into the post because the essay was basically finished by the time i remembered i had it. I should have included it. it shows an algo change that boosts elon’s reach specifically.
every other source was speculative:
super bowl fiasco - https://www.theguardian.com/technology/2023/feb/15/elon-musk... - little evidence
elon forced 100 engineers to boost his tweets? - hearsay
there are also supposed whistle blowers - https://substack.com/@theconcernedbird/p-154577954 - but again. no evidence.
And the algorithm is still closed source.
i’m sorry the writing beyond the graph didn’t land for you.
mr gruez - I'll do better next time.
-- the author
I _am_ still seeing lots of recycled content looking for clicks.
"Manufacturing consent", the book by Chomsky and Herman, details techniques that are largely unused in this situation. Chomsky's book by disclosing the hidden editor works against the effect rather than for it.
Here it's closer to a state-run media outlet, with the exact ambiguity that implies: a known editor pretending to be objective, except here the editor only really cares about certain topics, and others are encouraged to roam freely (if traceably).
In Chomsky's case, the editor's power comes from being covert, but only if people are fooled, so the book works to diminish it. In this case, the power comes from the editor being known and unstoppable. You have to accept it and know yourself as accepting it -- which means you have to buy in as a fan to avoid seeing yourself as herded, or out yourself as an outsider. Since most people take the default step of doing nothing, they get accumulating evidence of themselves as herded or a fan. It's a forcing function (like "you're with us or against us") that conditions acceptance rather than manufacturing consent.
In this case, articles (showing what happens when you oppose the editor) and ensuing discussions like this (ending in no-action) have the reinforcing effect of empowering the editor, and increase the self-censoring effects. They contribute to the aim and effect of conditioning acceptance. So they might not be helpful. (Better would be the abandonment of a platform when it fails to fulfill fairness claims, but that's hard to engineer.)
Funnily enough we should know that, since Elon promised to open source the algorithm in the name of transparency. But what actually happened is they dropped a one-time snapshot onto GitHub two years ago, never updated it, and never acknowledged it again. Many such cases.
And never forget the isElon boolean that would increase post visibility. lol, what a shame.
It’s not a very sophisticated algorithm, likely because the best people aren’t super keen on working there for WLB reasons.
https://knightcolumbia.org/content/protocols-not-platforms-a...
Honestly asking. For me, a former "public utility" poster, it seemed like the public square for elite opinion and that was what made it a utility. I don't think anyone was saying we need public utility microblogging in general.
Many of the communication outlets through this utility also have their own web infrastructure, hopefully serving as a single source of truth, whether that looks like wsj.com or whitehouse.gov. Interestingly enough, the W3C has a recommendation for publishing activities in an interoperable manner. There's even talk of putting the Bluesky protocol through whatever process the IETF uses to create a request for comments.
>See Marsh v. Alabama for this -- a company town was prohibited from barring picketing and pamphleting on private sidewalks. [user?id=rabite]
If people don't already know, the internet is easily manipulated and people tend to get ideas and reassurances of ideas based on what their group's opinions are, and those opinions are manipulated. It's easy to create multiple accounts, easy to change IP addresses, easy to bot comments; anyone can do it and it's easy to automate.
The earliest example I can recall was manipulating the Amazon ratings system, now it's everywhere.
They won't because it would reveal moderation biases and trends.
There are behemoths living among us. There will soon be social media accounts with enough sway to manufacture truth.
What needs to be learned is society is like a national park. Left to its own devices it will end up trashed - people leave garbage, move in and use it for whatever they like. So, we fund a service that keeps parks maintained. We understand the benefits of the National Park Service because they are visible and we are visual creatures. But for some reason we have a more laissez-faire attitude to unchecked accumulation and its downstream effects.
It’s risky for power to be so concentrated. We’re forced to hope for benevolence and there’s no backup plan.
What more can be done to show the orders of magnitude of difference between the most and least powerful?
Trying to achieve this on a for profit platform is pure folly.
These are time-wasting machines, and they were never truly capable of being more than that, particularly once their user base reached a size where monthly churn no longer impacts the bottom line of advertising revenue.
I think this is barking up the right tree with the wrong lesson - these things are the same. Elon Musk, for worse mostly, is a social influencer. You can tell because a lot of people follow him. I am sure the algorithm is unreasonably kind to him (as he can write it), but it's also true that a lot of people care what he does, and what he does changes what people care about.
The real question here, to me, is: does this kind of mass social calculus make any kind of real sense? Can we actually extend the idea of interest to 219,000,000 people or do we leave the coordinate system at some point? I suspect it doesn't hold up.
I am a long time believer in the need for good algorithmic filtering. There is more happening in the world than I have attention for and I want a machine to help me. Most solutions are quite bad because they are focused on how much money they can make instead of how much they can help. But I think it's a real problem and the bad, money-grubbing algorithms that surround us now are making our lives much worse.
Ultimately I think this comes back to operationalizing human relationships. What does it mean, mathematically, for Musk to have that many followers? This is distasteful but real, I fear, in the age we live in. Social influence is clearly real, we are measuring it in flawed ways, and we should try to improve those flawed measurements.
This is a super interesting way of looking at it.
The math works because of the two-party system.
American politics due to the two-party system is fundamentally dishonest. Issues are packaged across parties and you have to buy everything the party is selling. For example there's probably lots of Republicans that would not mind decently run government-subsidized healthcare and there's lots of Democrats that think the government should respect their right to be armed. But because the parties don't really support these positions, it creates significant pressure for people outside of the party buckets to twist their public political talk. Fundamentally this makes political talk and political social media activity dishonest as well. When owners of social networks become political figures, it basically turns all coefficients in this equation to exponents.
Elon Musk owns the platform. He directly dictates how it works. He ordered engineers to boost his posts by a factor of 1000.
https://www.theverge.com/2023/2/14/23600358/elon-musk-tweets...
Whether some people like EM’s posts is beside the point. It’s manufactured.
At least this is visible boosting. The next step is to boost behind the scenes, entirely unauditable. All of the power (and more) of an editor, none of the accountability.
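The reported mechanism is mechanically trivial, which is part of the point. Here is a toy Python illustration (names and structure are hypothetical, not Twitter's actual code) of how a single hard-coded author multiplier, like the 1000x factor described in the Verge story, swamps organic ranking:

```python
# Toy illustration (hypothetical names, not Twitter's code) of a
# hard-coded author boost: one multiplier applied after organic scoring
# dominates everything else in the feed.

BOOSTED_AUTHORS = {"owner_account": 1000}  # reported 1000x factor

def final_score(post):
    return post["organic_score"] * BOOSTED_AUTHORS.get(post["author"], 1)

feed = [
    {"author": "popular_user", "organic_score": 900},
    {"author": "owner_account", "organic_score": 5},
]
feed.sort(key=final_score, reverse=True)
print([p["author"] for p in feed])  # → ['owner_account', 'popular_user']
```

A post with almost no organic traction still outranks genuinely popular content, and nothing in the visible feed reveals that the multiplier exists.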
Any links greatly appreciated.
Is this true on Twitter/X?
If so, what is the rationale?
Random obscure account tweets about a 90-day pause... a few people talk about it... and suddenly news outlets run with it and the markets freak out.
"who made you this way?"
"you did."
- american politics circa 2025
Especially if I refuse to debate him and instead hurl insults at him and viciously deride him.
The same is true of the ordinary and the middle-of-the-road people when it comes to fascism.
The best way to create fascists is to attack and histrionically go after non-fascists and demand they conform to our way of thought.
Just being left-wing and going after people out of disgust over their opinions, I've accidentally alienated more people and created more fascists than any of these limp-wrist right-wing conservatives could ever hope to create.
I only realized it years later.
Radicalism begets radicalism.
Along with many other things, like "Heil Hitler." Assuming this is actually his account, he is 100% a Nazi.
Does the author of this piece take a principled stand against censorship and bias? Or is he just upset that the censorship and bias isn't going in his preferred direction?
Right. Did everyone forget about the twitter files already?
BUT remember that what you see is driven by the people you follow, mostly. Don’t like what they say or their political persuasion? Unfollow.
It would be nice if this were true, but I did an experiment where I created a new account on a VM with a new IP not previously connected to me and it almost immediately started serving me right wing slop regardless of who I followed. It seems obvious to me from this anecdote that it’s not as simple as following or unfollowing.
If people want to post their ideas on the internet, they can do so on their own websites for nearly nothing. Getting someone to listen to you? That’s much more expensive… everyone who’s been used to getting that free from social media should consider that it used to be much harder to get attention for one’s ideas, which used to be assumed to be uninteresting by default.
Does that include the lie that the story was "Russian propaganda"?
Twitter has become a particularly nasty version of it. In the before times, Google, Twitter, Reddit, etc. usually spent their efforts trying to manipulate things in a mostly benign way.
If you like free markets, then you must be opposed to Twitter. This is a market controlled by a few. Competition is rigorously hunted down. Lies and fake social proof packaged into "free speech." Only the chosen ones are allowed audience.
This is the opposite of capitalism. This is the worst of cronyism.
Force-switching all accounts to unfollow Democrats and follow Republicans and Elon, signal-boosting right-wing conspiracy theorists, blocking or suspending left or liberal accounts; it's just naked power centralization all the way down...
The guy likely juices his own numbers, floods posts he likes with botted engagement, etc.
Likes are private so he can delude Trumpers into thinking they're popular in a sea of bots.
There's so much misinformation, fabrication, and half-truth out there, repeated over and over again in various forms.
When the full story surfaces two days later, they'll never see that on their cult hub.