I originally created the site as a way to track which games would be supported on Linux, since at the time the Steam Deck was about to release and some games were starting to support it. It has since blossomed into a larger project, which some other tools even pull from! I would never even have imagined that when I first started making this.
I do want to address something I see being discussed in the comments, which is that people say anti-cheats are snake oil, or useless. This is a big misunderstanding, and I feel like those more technically inclined should understand that anti-cheat is a "defense-in-depth" type of approach, where it is just one of many lines of defense. Some anti-cheats are pretty useless and don't do much, but some actually do try to protect the game you're playing. But, just like DRM, it can be cracked, and that's why it's a constant arms race rather than a one-and-done thing.
I'm writing out a longer post about this for the future, but just know that without client-side anti-cheat, it would be far too easy for an attacker to cheat in these games. We're still a ways out from letting AI (see VACnet [1] and Anybrain [2]) determine server-side whether someone is cheating, so for now we have to rely on heavier client-side techniques and server-side decision making.
Also if anyone has questions about the site (or for me), I'll try to answer them here when I see them. If not, have a nice day!
As a serious player of many multiplayer games I disagree. All it takes is one cheat to circumvent the protections and soon enough every cheater will use that circumvention.
Meanwhile, I, the legitimate player, suffer from degraded performance, disconnections (looking at you Amazon Games - you've not been able to fix your (most likely) Easy Anticheat disconnection issue in 2 years!), or outright inability to play.
Perhaps the cheating situation would be worse without anticheats, but considering how rampant it seems to be in fast-paced or grindy games I play, I kind of doubt it.
The best anti-cheat is proper netcode. Games rarely do this because it's expensive and difficult. Consumers will buy it anyway.
Anti-cheat on top of that is like calling an open window guarded by a loud wiener dog "defense in depth".
Take the analogy of enabling better police work by granting unlimited access to our private communications. No one doubts it would be effective, but the cost and the threat are too great.
This is the line we draw in the sand: get out of the kernel, anti-cheat has no business being there. The cost and threat are too great.
This acceptance is the same situation that brought us the Crowdstrike incident. It's unacceptable.
We fail as an industry and as a society when we accept these compromises.
Believe it or not, most people don't play video games against strangers. Anti-cheat is of no value to them. Even for people who do play against strangers, even uncompromised anti-cheat doesn't stop many forms of cheating, like macro mice. Especially now, with all the success machine learning has shown at playing video games with nothing more than a video feed and the button inputs, the amount that anti-cheat can help is clearly quite bounded and getting worse over time.
And the cost? Anti-cheat comes at the cost of general-purpose computing, at the cost of being able to control the computers with which you trust your most intimate secrets. It's a civil liberties nightmare, or at least a prerequisite technology for many such nightmares. Opposition to anti-cheat is opposition to RMS's "Right to Read" dystopia (https://www.gnu.org/philosophy/right-to-read.en.html).
I don't think it's too far a leap to say that anti-cheat or DRM technology that comes at the expense of the availability of general-purpose computing is more of a problem for human rights than the farcical bedroom cameras I started with.
So when you advocate anti-cheating technology that locks users out of controlling their own computers, you're favoring an at-best incremental improvement which can still be evaded for a narrow application that most people don't care about... and this comes at the expense of imperiling the human rights of others.
Like with many things there is an asymmetry to the costs: Anti-cheat and DRM substantially fail if even a moderate amount of dedicated people still have a way to cheat. Yet the damage to people's freedom from the loss of general purpose computing is still substantial even when the lockdowns can be evaded.
If anti-cheat came at no meaningful cost the fact that it could be evaded wouldn't be a meaningful argument against it. But it's expensive to develop, intrusive, disruptive, and the more successful it is the more effective it'll be at being abused to deny people control of their computers in anti-social ways.
But, in practice, it usually doesn't result in any new cheaters. There are myriad reasons for this, but I won't go over them here.
Why do we need separate anti-cheat programs? Couldn't operating systems simply offer an option, when creating a process, that prevents all operations from looking at that process's memory (and maybe, if such a process is about to be launched, the user has to explicitly accept that by clicking a button)? Wouldn't that stop almost all cheats without needing separate anti-cheat programs, since I assume those programs have to use OS facilities to mess with the game anyway?
Of course, nowadays DRM is sort of baked in, so I guess anti-cheats could be too?
Already the case for userspace programs, due to virtual memory
> those programs have to use OS facilities to mess with the game anyway.
Cheats today are essentially drivers; they do not run as userspace programs. Hence, they can do literally anything on your computer. In terms of privileges, driver code runs at a level as privileged as the operating system itself, hence the need for programs that run at the level of the OS kernel to catch the cheats.
Userspace programs can read other userspace programs' memory; it's part of the standard win32 API[0].
> Cheats today essentially are like drivers, they do not run as userspace programs. Hence, they can do literally anything on your computer. In terms of privileges, driver code runs at a level as privileged as the operating system. Hence the need for programs that run at the level of the OS kernel to catch the cheats.
Some cheats nowadays do this, but only because of anti-cheat programs. If there were no anti-cheat programs, they wouldn't have to.
[0] https://learn.microsoft.com/en-us/windows/win32/api/memoryap...
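To make the point concrete: `OpenProcess`/`ReadProcessMemory` are the documented win32 calls for this, and Linux exposes the same capability to ordinary userspace through `/proc/<pid>/mem` (gated only by ptrace permission rules). A minimal sketch, reading a known value back out of the current process through that interface the same way an external tool with ptrace rights would:

```python
import ctypes
import os

def read_process_memory(pid: int, address: int, size: int) -> bytes:
    """Read `size` bytes at `address` from process `pid` via /proc/<pid>/mem.

    This is the Linux analogue of win32 OpenProcess + ReadProcessMemory:
    a plain userspace program, given the usual same-user/ptrace permissions,
    can read another process's memory with no kernel driver involved.
    """
    with open(f"/proc/{pid}/mem", "rb") as mem:
        mem.seek(address)
        return mem.read(size)

if __name__ == "__main__":
    # Demonstrate on our own process: place a known value in memory,
    # then read it back out through the /proc interface.
    secret = ctypes.create_string_buffer(b"player_hp=100")
    address = ctypes.addressof(secret)
    print(read_process_memory(os.getpid(), address, len(secret.value)))
```

A game-memory cheat is this loop plus knowledge of which addresses hold player state, which is exactly why anti-cheat vendors argue userspace isolation alone is not enough.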
If you want to know why the OS doesn't enforce this (https://slashdot.org/story/432238), you roll into HN's other favourite topic of "why can't I run the X of my choice on my OS?"
https://en.wikipedia.org/wiki/Cheating_in_online_games#Sandb...
- Have the user-facing OS be a VM managed by that hypervisor
- Have the game process run under a second sibling VM
The hypervisor can then mediate hardware access and guarantee nothing from VM A can access VM B nor the other way around.
IIRC WSL2 enables such a mode: both the Windows OS the user sees and the Linux VM run under Hyper-V as sibling VMs.
And the Xbox One and up do EXACTLY the above: each game runs in its dedicated VM (I presume that's what "trivially" enables Quick Switch/Resume via pausing/snapshotting the VM) and apps run in another.
Tangent: I somewhat wish MS would allow WSL2 on Xbox.
[1] https://r6fix.ubi.com/projects/RAINBOW6-SIEGE-LIVE/issues/LI...
Having anti-cheat ban everyone doesn't make it good. What makes anti-cheat good is banning cheaters while leaving honest players alone.
Apparently it's not really effective at all.
One thing the devs could do without anti-cheat is automate analysis of, e.g., headshot rate, movement speed, etc., but most games don't do that. If the average player makes 25 kills per hour in a game and someone sustains 150 over longer periods, you don't need an anti-cheat to do something about it.
Consider, for example, professional gamers. They spend countless hours practicing, and they can easily outcompete casual gamers who don't have the time to refine their skills daily.
Statistical anti-cheat is extremely weak in any game where legitimate human players can end up as outliers.
Anti-Cheat software https://en.wikipedia.org/wiki/Cheating_in_online_games#Anti-...
We reliably use statistical process control to automatically calibrate incredibly precise, nanometric-scale machinery for purposes of semiconductor engineering. Surely, with the extreme amount of data available regarding every player's minute inputs in something like a client-server shooter, you could run similar statistical models to detect outliers in performance. With enough samples you can build an extraordinarily damning case.
The only downside is that statistical models will occasionally produce false positives. But I've personally been "falsely" banned by purely deterministic methods (VAC) for reasons similar to others noted in this thread (i.e. leaving debugging/memory tools running for a separate project while playing a game). So, in practice, I feel like statistical models might even provide a better experience around the intent to cheat (i.e. if you aren't effectively causing trouble, we don't care).
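As a toy illustration of the statistical approach (the metric, threshold, and numbers here are invented for the example, not taken from any real anti-cheat system):

```python
import statistics

def flag_outliers(kills_per_hour: dict, z_threshold: float = 4.0) -> list:
    """Flag players whose kill rate is an extreme outlier versus the population.

    A real system would combine many more signals (headshot rate, reaction
    times, view-angle snaps) and require far more samples before acting,
    precisely to avoid the false positives discussed above.
    """
    rates = list(kills_per_hour.values())
    mean = statistics.mean(rates)
    stdev = statistics.stdev(rates)
    return [
        player
        for player, rate in kills_per_hour.items()
        if stdev > 0 and (rate - mean) / stdev > z_threshold
    ]

if __name__ == "__main__":
    # 50 ordinary players around 20-26 kills/hour, one sustained at 150.
    stats = {f"player{i}": 20 + (i % 7) for i in range(50)}
    stats["suspect"] = 150
    print(flag_outliers(stats))  # ['suspect']
```

The high threshold is the point: a legitimate pro several standard deviations above the mean survives, while a sustained rate far outside human range accumulates an extraordinarily damning case.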
Battlefield started out using PunkBuster, one of the earliest kernel-level anti-cheats. With Battlefield 4, they used FairFight, a statistical server-side solution, alongside PB.
With Battlefield 1, they dropped PB, and operated with just FairFight.
And now, EA have decided to create their own kernel-level AC, called EA AntiCheat, and are implementing it on BF5 and BF1, largely because FairFight was not enough.
But I think collecting all that data and sparingly using it is the best approach. You could combine that with headshot rate, etc. and really narrow down relatively reliably.
I avoid these titles myself. In fact, I don't run wine, steam or game console emulators on my Linux workstation. I run Windows VMs for isolation and security.
The cheaters don't make them, they buy them. It really needs a multi-factor solution; the technical solution is not enough. Trying to buy cheats should be like trying to buy chemical precursors to illicit drugs. There should be a strong social stigma. Most cheaters have no problem with it because "everyone else is cheating", justifying their behavior. There was a time when "everyone else smokes" was justification, but now it's mostly defeated. There should be real-world implications: sign in with your phone number and 2-factor auth, which is tied to a physical address. Cheating is a form of fraud. There should be legal implications.
Oh my. Seeing your posts sincerely makes me want to lobby to ban video games, at least if adding additional liabilities to distributing software or computing devices were actually a direction the games industry was promoting.
We need to stop letting stupid entertainment companies trample our rights to narrowmindedly maximize their profits.
Totally agree, both should be absolutely legal and accepted
It's just that I use my machine for more stuff than gaming; and for anything else I'd really rather not have it on there at the same time.
The only difference is that maybe you have a few less rage hackers that get caught by it, but anyone that really wants to cheat will still be able to, it's just a lot harder for players to see. All they care about is the public perception. If it looks like it has less cheaters, it's good enough for them.
The cost? You basically install malware from a Chinese company on your computer...
to me, competitive video games are as far gone as pro cycling in terms of the lengths players go to in order to feel "superior" to others.
<rant> many of these games remain broken with other things while raking in insane amounts of money, so regularly maintaining anti-cheat inside the game, if at all, is probably very low in their backlog.
the third-party ones are then used to avoid having to think about this, but even these providers are more focused on attracting game publishers than doing something meaningful. </rant>
personally, i think games that can be played in local multiplayer or with friends should have a way to play without anti-cheat. don't allow competitive modes that way, but having the option would alleviate a lot of these issues.
https://pbs.twimg.com/media/GH3CPPHXwAAMR3i?format=png&name=...
That said, you "may" have a chance at detecting it using game-related metrics on the server side, because an AI will very probably betray itself at some point; "AI"s are usually imperfect, like humans.
Elephant in the room: the more you put big brother in your system, the less you will be able to run really free operating systems. So much for your digital freedom.
Look at the abominations which are video game consoles.
It is obscene to have to pay a lot of money for completely locked-down, digital-jail devices. It should be illegal, period. They should be leased for cheap.
Why can't the servers distrust the clients? What should a 'client side anti cheat' actually prevent?
The way I think I'd tackle such things is to have multiple copies of each character model moving in different locations and in different ways, such that trying to spy on the state of the game from one client's viewpoint yields mostly false data. New "threads" would fork off of existing threads and would only be culled when there are too many, or when they're about to cause a side effect that would be visible if they were real. That way the server would be responsible for feeding misinformation to clients while maintaining the true state of the game as a secret to itself.
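A toy sketch of that decoy idea, with all names and numbers invented for illustration: the server tracks the real position plus a few decoys, and drops a decoy only when it is about to enter something the client could legitimately observe:

```python
import random

class DecoyedCharacter:
    """Server-side state: one real position plus several plausible decoys.

    A wallhack reading the position stream cannot tell which track is real;
    the server culls a decoy just before it would do something a legitimate
    client could actually see, so only false data leaks through walls.
    """
    def __init__(self, real_pos, n_decoys=3, rng=None):
        self.rng = rng or random.Random(0)
        self.real = real_pos
        self.decoys = [self._jitter(real_pos) for _ in range(n_decoys)]

    def _jitter(self, pos):
        return (pos[0] + self.rng.uniform(-20, 20),
                pos[1] + self.rng.uniform(-20, 20))

    def snapshot_for_client(self, is_observable):
        """Tracks sent to a client; decoys about to become visible are culled."""
        tracks = [self.real] + [d for d in self.decoys if not is_observable(d)]
        self.rng.shuffle(tracks)
        return tracks

if __name__ == "__main__":
    char = DecoyedCharacter((100.0, 100.0))
    # Pretend anything with x < 105 is inside the player's view frustum.
    snapshot = char.snapshot_for_client(lambda p: p[0] < 105)
    print(len(snapshot), "tracks sent; the real one is always among them")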
There are two issues. One is the user seeing things that the server is hiding, such as enemies hidden behind obstacles, by going into "wireframe mode". The other is superhuman performance via computer assistance, or "aimbot hacks".
The first is a performance issue. The server can do some occlusion culling to avoid telling the client about invisible enemies, but that adds to the server workload. The second is becoming impossible to fix, since at this point you can have a program looking at the actual video output and helping to aim. (You can now get that in real-world guns.[1]) Attempts to crack down on people whose aim is "too good" result in loud screams from players whose aim really is that good.
[1] https://talonprecisionoptics.com/technology/how-it-works/
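For the first issue, server-side occlusion culling is conceptually simple; the cost is doing a line-of-sight test for every player pair every tick. A toy 2D sketch (real engines use precomputed visibility sets or corner culling with latency margins, not brute-force segment tests like this):

```python
def segments_intersect(p1, p2, p3, p4):
    """Return True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def visible_enemies(player, enemies, walls):
    """Only send enemies whose line of sight to the player is unblocked.

    `walls` is a list of ((x1, y1), (x2, y2)) segments. A real engine adds
    a margin so enemies "peeking" a corner pop in early enough to render.
    """
    return [
        e for e in enemies
        if not any(segments_intersect(player, e, w1, w2) for w1, w2 in walls)
    ]

if __name__ == "__main__":
    walls = [((5, -5), (5, 5))]        # one vertical wall
    enemies = [(10, 0), (3, 3)]        # one behind the wall, one in the open
    print(visible_enemies((0, 0), enemies, walls))  # [(3, 3)]
```

An enemy culled this way simply never reaches the client, so a wallhack has nothing to reveal; the screams from players with genuinely superhuman aim remain a separate problem.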
In the future I kind of hope the handshake from controller<->console becomes a lot more robust, maybe working in a similar way to HDCP.
Thanks to neural networks, we have made enormous progress in the computer vision domain. As a byproduct, this invalidates the method we use to separate machines from humans (image-based CAPTCHAs).
I guess aimbots will switch to CV-based systems to detect enemies rather than dumping game memory to find the enemy's position. This change will force anti-cheat systems to perform an automated Turing test, which is hard. (Telling the bot and human apart only by watching the replay is much more challenging compared to the above CAPTCHA problem. And we are currently losing at the CAPTCHA frontline, too.)
Apply that to every interaction the server has to be authoritative about: movement, reloading.
Your game will be unplayable.
And if you want to combat aimbotting: your viewport and hit point would have to be server-authoritative too.
Basically: unless it's Stadia or GeForce Now, this won't work.
It should be clear that servers already do not trust the client; they do many checks, which is why you don't see teleportation hacks in games like Counter-Strike or Valorant. There used to be cheats in the Counter-Strike games like "nospread", where you could have 100% pixel-perfect aiming, but that was because the client was trusted. Now, in most games with some randomness in bullet spray patterns, the random seed differs between the client and the server, so something like "nospread" is no longer possible.
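The seed point can be made concrete. In this toy model (function names and seed values are illustrative, not from any actual engine), a "nospread" cheat only works if the client can reproduce the server's spread roll and pre-compensate its aim:

```python
import random

def bullet_offset(seed: int, shot_index: int) -> tuple:
    """Deterministic spray offset for a given seed and shot number."""
    rng = random.Random(seed * 1_000_003 + shot_index)
    return (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))

if __name__ == "__main__":
    client_seed, server_seed = 1234, 987654
    # With a shared seed, the cheat can predict the exact offset the server
    # will apply and counter-rotate the aim angle to cancel it:
    assert bullet_offset(client_seed, 0) == bullet_offset(client_seed, 0)
    # If the server rolls spread from its own hidden seed, the client's
    # prediction no longer matches what the server actually applies:
    print(bullet_offset(client_seed, 0), "vs", bullet_offset(server_seed, 0))
```

This is a design fix, not an anti-cheat program: the exploitable information simply never exists on the client.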
You might be stumbling upon "fog of war", that is, not sending data to a client unless the enemy player is close to visible, which is a thing. It's widely used, and I'd say effective, in MOBA/MMORPG/RTS games; however, in FPS games fog of war is many times more computationally expensive, which matters at the scale of games these days. It has been a thing for a long time in Counter-Strike with server plugins like "SMAC anti wall hack" or server-side occlusion culling; however, the implementations have sometimes not been perfect and require significantly stronger servers. https://github.com/87andrewh/CornerCullingSourceEngine
Riot Games also implements fog of war at scale in Valorant and has a blog post covering some of the issues they overcame. As you can see from the gif at the end of the blog post, even though fog of war works, it only reduces the effectiveness of wallhacks; they still provide a significant advantage. https://technology.riotgames.com/news/demolishing-wallhacks-...
The important reason I suggested MULTIPLE clones of a character and only forking new paths off of existing characters in the world is that it should eliminate any information oracle about which of those is the real character.
The popular cheats are "the client says the player just clicked at (1030, 534) on the screen", which is a totally valid move, except it's calculated by the cheat instead of the player.
* Aim Assist - what's that supposed to work with for the assist? I guess it might help someone target a player once they're exposed, or once they've locked on. For that I think that extremely top tier players might behave within fuzzing distance of tool assist, at least some of the time. Dodging might have similar issues. I could even see ML assisting inputs just based on frame-grabs off the screen video output. -- So I'm not sure what client side anti-cheat is supposed to do here.
* HUD improvements - like what?
Sure, if you develop a platform today, we can check a user's token against the hashtable we have in the database. But in games?? You can't verify the damage numbers users calculated, not fast enough.
Cheats of this type are DECADES in the past.
Today it's all about (a) enhancing normal behavior with artificial precision, not making any "illegal" (from the game's perspective) actions, and (b) giving the player information they aren't supposed to have but that is passed to the client for latency's sake.
Traditional anti-cheat can just be slapped on after the game is developed, in most games. If the game is very successful, then you can just update it with extra paid protections provided by the anti-cheat vendor.
The alternative is a local game engine that works with a partial game state, which is a challenge in itself. Even if you make that work, you still have to deal with people "modding" the client to gain an advantage, e.g. enemies painted red instead of camouflage.
I don’t mean to sound harsh, but it’s tough to tackle this kind of misconception because it’s stated with such certainty that others, who also might not know any better, just take it as fact.
Here’s the thing: Multiplayer servers trust clients mainly for performance reasons. In AAA game development, anti-cheat isn’t something we focus on right from the start. It typically becomes a priority post-alpha (and by alpha, I’m talking about an internal milestone that usually spans about a year—not the "alpha" most people think of, which is usually closer to an internal "beta", while "public beta" is more like release candidate 1). During that time, the tech team is constantly working on ways to secure the game (make it work, make it correct, make it fast).
If we were to bake in anti-cheat measures from the very beginning of a project, it would force us to scale back our ambitions. Some might argue that’s a good thing, but the truth is, we’d also risk missing critical milestones like First-Playable or Vertical Slice. You simply can’t tackle everything at once—focus is a measure primarily of what you are not doing, after all.
Back when I was working on The Division, we had some deep discussions about using player analytics and even early forms of machine learning to detect "too good" players in real-time. This was in 2014, well before the AI boom. The industry's interest in new anti-cheat methods has only grown since then, I promise you this.
At the end of the day, games are all about delivering an experience. That’s the priority, and a solid anti-cheat system is key to ensuring it. Endpoint security is currently the best solution we have because it doesn’t bog down the client with delays or force awkward mechanics like rollbacks or lock-step processing. Plus, it lines up with the (very heavy) optimisations we already do for consoles.
Nobody in this industry wants to install a rootkit on your PC if we can avoid it. It’s just the best trade-off (for all parties, especially gamers) given the circumstances. And let's be clear—these solutions are far from cheap. We pay a lot to implement them, even if some marketing material might suggest otherwise.
I'm sorry but this really does read like the start of a troll post.
Servers very much distrust the client. Obviously. That's literally rule #1. Don't trust the client!
Comments like yours are extremely irritating. Please don't behave this way with your co-workers.
Anyhow, there's all kinds of types of cheats for different kinds of games. There's a variety of mitigations for each kind. I don't think there's a multiplayer shooter on the planet that has fully solved aimbots. For however clever you think you are I promise the cheat makers are much, much more clever. :)
Said this without even flinching or having a second thought.
Bravo.