I ran a similar audit two weeks ago using a different methodology — deterministic quality gates rather than traditional CVE scanning. The interesting finding wasn't the security vulnerabilities (Cisco's 512 CVEs cover that). It was the AI drift patterns underneath: systematic error suppression, silent catch blocks, empty error handlers throughout the codebase. The code scores exceptionally well on structural metrics — clean architecture, good separation of concerns. But the AI agent optimized for 'compiles and passes tests' over 'fails safely.'
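To make the pattern concrete, here's a minimal sketch (hypothetical code, not taken from the audited repo) of the drift I mean: the first function is the 'compiles and passes tests' version that swallows every error, the second is the 'fails safely' version that handles what it can and re-raises what it can't:

```python
import json
import logging

logger = logging.getLogger(__name__)

# Anti-pattern: a blanket except that silently masks missing files,
# malformed JSON, and permission errors alike. Callers see a
# "successful" empty config and keep running in a bad state.
def load_config_silent(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return {}

# Fail-safe variant: catch narrowly, log the degraded path, and
# re-raise the errors the caller must not run past.
def load_config_safe(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        logger.warning("config %s missing; falling back to defaults", path)
        return {}
    except json.JSONDecodeError:
        logger.error("config %s is malformed", path)
        raise  # a corrupt config is not a state worth continuing in
```

Both versions pass a happy-path test suite; only the second one surfaces the corrupt-config case, which is exactly the distinction structural metrics don't measure.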
That's a pattern I've now seen across multiple AI-generated codebases. Traditional security scanners miss it entirely because it's not a vulnerability; it's a design philosophy baked in by the generation process. I published the full analysis with specific line numbers and commit hashes: [https://medium.com/@erashu212/i-ran-quality-gates-against-op...]