Meh. Why wouldn't the model makers be fantastic on security? The motivation not to be the company known for "silently slipping vulnerabilities into generated code" seems fairly obvious.
People have always been able to slip in errors. I am confused why we assume that an LLM will, on average, be worse rather than better on this front, and I suspect a lot of residual human bias and copium.