Things built with security in mind are not invulnerable, human-written or otherwise.
This applies whether the code is written by a human or an AI, and likewise whether it is reviewed by a human or an AI.
Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR being reviewed? Or read the relevant RFCs? (And does it even have permission to do all this?)
And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?
This all leads to the conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.
If a marketer claims something, it is safe to assume the claim is at best "technically true". Only if an actual engineer backs the claim can it start to mean something.
so the "reviewing" process will be looking for the needles in the haystack
when you have no understanding or mental model of how it works, because there isn't one
it's a recipe for disaster for anything other than trivial projects
>"NOOOOOOOO!!!! You can't just use an LLM to write an auth library!"
>"haha gpus go brrr"
(Those lines remain in the readme, even now: https://github.com/cloudflare/workers-oauth-provider?tab=rea...)
> Every line was thoroughly reviewed and cross-referenced with relevant RFCs
The issue in the CVE comes from a direct contradiction of the RFC. The RFC says you MUST check redirect URIs (and, as anyone who's ever worked with OAuth knows, all the functionality around redirect URIs is a staple of how OAuth works in the first place; this isn't some obscure edge case). They didn't make a mistake, they simply did not implement this part of the spec.
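For reference, RFC 6749 requires the authorization server to compare the `redirect_uri` in an authorization request against the value(s) the client pre-registered. A minimal sketch of that check (the function name and shape here are hypothetical, not the workers-oauth-provider API):

```typescript
// Sketch of RFC 6749-style redirect_uri validation.
// Hypothetical names; not taken from workers-oauth-provider.
function isAllowedRedirectUri(requested: string, registered: string[]): boolean {
  // The spec requires comparison against the client's registered value(s).
  // Exact string matching is the conservative choice: it rules out open
  // redirects via crafted paths, ports, subdomains, or userinfo tricks.
  return registered.includes(requested);
}
```

With exact matching, `isAllowedRedirectUri("https://attacker.example/cb", ["https://app.example/cb"])` is false; skipping this check entirely is what lets an attacker receive the authorization code at a URI of their choosing.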
When they said every line was "thoroughly reviewed" and "cross-referenced", yes, they lied.