A fun thing to keep in mind about software security is that it's premised on the existence of a determined and amoral adversary. In the long-long ago, Richard Stallman's host at MIT had no password; anybody could log into it. It was a statement about the ethics of locking down computing resources. That's approximately the position any security practitioner would be taking if they attempted to moralize against LLM-assisted offensive computing.
"a determined and amoral adversary" - I'd kinda disagree with this (the amoral adversary part being necessary). If you crawl through the vast data breach notification lists that many states are starting to keep - MA, ME, etc. there are so many of them (like literally daily banks, hospitals, etc. are having to report "data breaches" that never ever make the news) - not all of them are happening cause of ransomware. Sometimes it's just someone accidentally not locking a bucket down or not putting proper authorization on a path that should have it. It gets found/fixed but they still have to notify the state. However, if someone doesn't know what they are looking at, or it's a program so it really has no clue what it's looking at and just sees a bunch of data - there's no malicious intent but that doesn't mean that bad things can't happen because that data has now leaked out.
Guess what a lot of these LLMs are training on?
So while Andrey's software is finding all sorts of interesting stuff, there's a bunch of crap being generated inadvertently that is just bad.
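To make the misconfiguration point concrete, here's a minimal sketch of the kind of unauthenticated check that turns up an unlocked bucket. The bucket name is hypothetical, and obviously you'd only probe assets you're authorized to test:

    # Minimal sketch: does an S3 bucket answer unauthenticated list requests?
    # A public bucket returns 200 with a <ListBucketResult> body; a properly
    # locked-down one returns 403 AccessDenied. Bucket name is hypothetical.
    import requests

    def bucket_is_publicly_listable(bucket: str) -> bool:
        url = f"https://{bucket}.s3.amazonaws.com/?list-type=2"  # ListObjectsV2
        resp = requests.get(url, timeout=10)
        return resp.status_code == 200 and "<ListBucketResult" in resp.text

    if __name__ == "__main__":
        print(bucket_is_publicly_listable("example-hypothetical-bucket"))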
rms@gnu.ai.mit.edu
It was actually the A.I. Lab at M.I.T., and they already had their own dedicated subdomain for it. This had to have been around 1990-91. And IIRC, the actual admins made a valiant effort to keep all the shell users away from "root" privileges, so it wasn't a total dumpster fire and the system stayed alive, mostly. https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artif...
This is just it: AI, while providing some efficiency gains for the average user, will become simply too powerful. Imagine a superpower that allows you to move objects with your mind. That could be a very nice thing for many to have, because you could probably help people with it. That's the attitude many hacker-types take. The problem is that it also lets people kill instantly, which means telekinesis would just be too powerful to set against our animal instincts.
AI is just too powerful – and if more people took a serious stand against it, it might actually be shut down.
The argument that LLMs will enable "super powered" malware and that existing security solutions won't be able to keep up is completely overblown. I see zero evidence of this being possible with the current incarnation of "AI" or LLMs.
"Vide coded" malware will be easier to detect if the people creating it don't understand what the code is actually doing and will result in incredible amount of OpSec fails when the malware actually hits the target systems.
I do agree that "vibe coding" will accelerate malware development and generally increase the number of attacks on orgs. However, if you're already applying bog-standard security practices like defense in depth, you shouldn't be concerned about this. If anything, you might want to start thinking about SOC automations in order to reduce alert fatigue.
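As a rough illustration of what I mean (not any vendor's schema; the field names and the 15-minute window are assumptions), one such automation is just collapsing bursts of identical alerts into a single ticket:

    # Sketch of a SOC automation for alert fatigue: fold alerts that share a
    # (rule, host) fingerprint and arrive within WINDOW of each other into
    # one "burst", so the analyst gets one ticket instead of dozens.
    from collections import defaultdict
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=15)  # assumed grouping window

    def dedupe_alerts(alerts):
        bursts_by_key = defaultdict(list)
        for alert in sorted(alerts, key=lambda a: a["ts"]):
            key = (alert["rule"], alert["host"])
            bursts = bursts_by_key[key]
            if bursts and alert["ts"] - bursts[-1][-1]["ts"] <= WINDOW:
                bursts[-1].append(alert)  # still the same burst
            else:
                bursts.append([alert])    # gap too large: open a new burst
        return [b for bs in bursts_by_key.values() for b in bs]

    alerts = [
        {"rule": "ssh-bruteforce", "host": "web-01", "ts": datetime(2024, 1, 1, 9, 0)},
        {"rule": "ssh-bruteforce", "host": "web-01", "ts": datetime(2024, 1, 1, 9, 5)},
    ]
    print(len(dedupe_alerts(alerts)))  # 1 ticket instead of 2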
Stay far away from anyone trying to sell you products to defend against "AI enabled malware". As of right now it's 100% snake oil.
Also, this is probably one of the cringiest articles on the subject I've ever read and is only meant to spread FUD.
I do find the banner video extremely entertaining, however.
What bothers me the most about this article is that the tools attackers use to do things like find 0days in code are the same tools defenders can use to find the 0day first and fix it. It's not like offensive tooling is being developed in a vacuum and the world is ending because "armies of script kiddies" will suddenly drain every bank account in the world. Automated defense and code analysis is improving at roughly the same rate as automated offense.
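To be concrete about the symmetry: the same LLM pass an attacker might run over your code works as a defensive review step. A hedged sketch, assuming the openai client library; the model name and prompt are placeholders, and the output still needs human triage:

    # Sketch: run the same LLM an attacker would use over your own code
    # first. Findings are leads for a human reviewer, not ground truth.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def review_for_vulns(source: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o",  # assumption: any capable code model works here
            messages=[
                {"role": "system",
                 "content": "You are a security reviewer. Flag memory-safety, "
                            "injection, and auth bugs, with line references."},
                {"role": "user", "content": source},
            ],
        )
        return resp.choices[0].message.content

    # e.g., wire this into CI so every touched function gets a pass pre-merge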
In this awful article's defense though, I would argue that red team will always have an advantage over blue team, because blue team is by definition reactive. So as tech continues its exponential advancement, the advantage gap for the top 1% of red teamers is likely to scale accordingly.
It will be extremely interesting to see how vulnerability discovery evolves with LLMs but the whole "sky is falling hide your kids" hype cycle is ludicrous.
Somebody should pitch that to YC.
You know, there are some pretty crazy run rates out there.
I'm having trouble reconciling what you wrote here with that result. Also with my own experiences, not necessarily of finding kernel vulnerabilities (I haven't had any need to do that for the last couple years), but of rapidly comprehending and analyzing kernel code (which I do need to do), and realizing how potent that ability would have been on projects 10 years ago.
I think you're wrong about this.
Also, if you throw these models at enough code bases, they will probably get lucky a couple of times. So far, every claim I have seen didn't stand up to rigorous scrutiny. People find one bug, then inflate their findings and write articles that would make you think they are far more effective than they really are, and I am tired of this hype.
curl has had to consider dropping its bug bounty after finding nearly all of the incoming reports were just AI-generated nonsense…
Also, I stated that they do provide very large gains in certain areas, like writing a fuzz harness (sketched below) or reversing binaries. I'm not saying they have absolutely no utility; I'm simply tired of grifters attempting to inflate their findings for clout. Shit has gotten out of control.
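For what it's worth, the fuzz-harness point is easy to make concrete. A minimal sketch using Google's Atheris fuzzer for Python, where parse_record is a hypothetical stand-in for whatever parser you actually care about:

    # Minimal Atheris fuzz harness. parse_record is a hypothetical target;
    # in practice you'd import the real parser you want to exercise.
    import sys
    import atheris

    def parse_record(data: bytes):
        # toy parser: 2-byte magic, 1-byte length, then payload
        if len(data) >= 4 and data[:2] == b"RX":
            return data[3:3 + data[2]]
        raise ValueError("bad record")

    def TestOneInput(data: bytes):
        try:
            parse_record(data)
        except ValueError:
            pass  # expected rejection; crashes and hangs are the findings

    atheris.instrument_all()  # coverage instrumentation to guide the fuzzer
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()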