- Generative AI uses the context you provide to help generate additional tokens
- If the context you provide is bad (low-quality code, riddled with security issues), you'll similarly get low-quality code out
- If the context you provide is good (high-quality code), you'll get better-quality code out
What we wanted to highlight with this research is that security is meaningfully impacted when you're generating code, particularly in low-quality codebases.
If you didn't write it, then it's going to be low-quality, insecure code by default.
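A minimal sketch of why this matters, using a hypothetical example: assistants complete code in the style of the surrounding context, so if the context builds SQL by string interpolation, a completion tends to repeat that injectable pattern. The function names and schema below are invented for illustration; the safe variant uses a parameterized query instead.

```python
import sqlite3

# Hypothetical "low-quality context": SQL built by string interpolation.
# A completion generated amid code like this tends to copy the pattern,
# and attacker-controlled input becomes part of the SQL text.
def find_user_unsafe(conn, name):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

# The same lookup as a high-quality codebase would model it:
# a parameterized query, where the driver keeps data out of the SQL text.
def find_user_safe(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"          # classic injection payload
leaked = find_user_unsafe(conn, payload)  # matches every row
safe = find_user_safe(conn, payload)      # matches nothing
```

Here the unsafe variant leaks the whole table for the injection payload, while the parameterized version treats the payload as an ordinary (non-matching) name. The point is not this one bug but the pattern: generated code inherits whichever of these two styles dominates the context.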
Personally, the riskiest AI stuff I do is when I'm completely stuck on something: I might accept AI suggestions without much thought just to see if they resolve whatever issue I'm running into. But in my mind, those parts of the code are always "dirty" until I thoroughly review them; in the vast majority of cases, I end up refactoring those parts myself. If I ask AI to improve a text I wrote, I rarely take it as-is; I typically open both versions side by side and apply the parts I like to my original text.
In my opinion, stuff created by AI is inherently "unfinished". I cringe whenever people have AI do something and just roll with it (writing an essay, code, graphic design, etc.). AI is excellent for going most of the way, but in most cases a human still needs to review the result and add the finishing touches, at least for now.