Lately I have been working with IDE-integrated assistants like JetBrains and Windsurf, whose claim to fame is that they can make small changes to existing code bases.
I've been impressed with the ability of these systems to, say, look at Liquibase migrations and jOOQ stubs and figure out what the original CREATE TABLE must have been, or to help me with a feature in Postgres I'd never used before and then use it from jOOQ.
On the other hand, AI-assisted programming is a lot like working with a junior developer: there is a lot of "Dude, there's a red squiggle on line 92!"
A basic mental model for a complex task is that it comprises N subtasks, each of which has a probability p of success. In that case the probability that the whole task succeeds is
pᴺ
p = 0.95 is not that bad, but if N = 20 we are looking at a 0.36 chance of success. Even an expert programmer will make mistakes with complex SQL queries, and the way they know they got it right is by looking at the results of queries and seeing whether they make sense.
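The arithmetic is easy to check. A quick sketch (the function name is mine; p and N are just the example values from above):

```python
# Model from above: a task of N independent subtasks, each succeeding
# with probability p, succeeds overall with probability p**N.
def task_success_probability(p: float, n: int) -> float:
    return p ** n

print(task_success_probability(0.95, 20))  # ≈ 0.36, as above
print(task_success_probability(0.99, 20))  # even p = 0.99 only gets ≈ 0.82
```

Note how quickly the compounding bites: even near-perfect per-step accuracy decays fast as the task gets longer.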
This essay, Fred Brooks's "No Silver Bullet",
https://en.wikipedia.org/wiki/No_Silver_Bullet
talks about why so many attempts to revolutionize programming have failed. It comes down to this: there are five things that burn up time in projects, some startup decides that only three of them matter, they make some improvement in the areas they care about, but because they didn't remove all of the roadblocks there is no revolution. It can be really exhausting to talk with people about these kinds of projects, because they insist on limiting the scope to make a product they can actually build, but in so limiting the scope they produce a product that is doomed to fail.
The missing link for AI in the database, I think, is in testing, recovery, and the like. It is already a real problem that the conventional answers for testing SQL are often bad. Because it might take several minutes to create a realistic test database, and because in the cloud you just couldn't always get a database instance when you needed one, people have avoided real integration testing for databases in many projects. So we wind up writing awful tests using mocks and such instead.
You are not going to get superhuman accuracy out of a SQL agent unless it has a set of test cases to work with. You just aren't. Testing is how we get superhuman performance out of humans.
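To make that concrete, here is a minimal sketch of what "a set of test cases" could mean for a SQL agent: a fixture schema plus known-good expected rows that any candidate query must reproduce. The schema, data, and function name are my own invention, and SQLite stands in for a real Postgres instance:

```python
import sqlite3

# Hypothetical fixture: a tiny schema with data whose correct answers we know.
FIXTURE = """
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL);
INSERT INTO orders VALUES (1, 'alice', 30.0), (2, 'bob', 12.5), (3, 'alice', 7.5);
"""

def check_query(sql: str, expected: list) -> bool:
    """Run a candidate query against a fresh fixture and compare the rows."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(FIXTURE)
    try:
        rows = conn.execute(sql).fetchall()
    finally:
        conn.close()
    return rows == expected

# A query the agent produced; it passes only if it reproduces the known answer.
candidate = "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
print(check_query(candidate, [("alice", 37.5), ("bob", 12.5)]))  # True
```

The point is not the harness itself but the feedback loop: with a fixture and expected results, the agent (or the human) can iterate until the query is actually right instead of merely plausible.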
There is also the issue of recovery and infrastructure protection. If the bot writes a bad UPDATE against your test database, that might mean 20 minutes to restore from backup (if you've got a big test database). Write it against the production database and it is like a game of snakes and ladders: anything you gained from AI is lost.
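One cheap form of protection, sketched here as my own illustration rather than anything these products actually do: run the agent's UPDATE inside a transaction, look at how many rows it touched, and roll back if the blast radius is bigger than expected.

```python
import sqlite3

def guarded_update(conn: sqlite3.Connection, sql: str, max_rows: int) -> bool:
    """Apply an UPDATE only if it touches at most max_rows rows."""
    cur = conn.cursor()
    cur.execute("BEGIN")
    cur.execute(sql)
    if cur.rowcount > max_rows:
        cur.execute("ROLLBACK")  # blast radius too big: refuse the change
        return False
    cur.execute("COMMIT")
    return True

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions by hand
conn.execute("CREATE TABLE t (id INTEGER, flag INTEGER DEFAULT 0)")
conn.execute("INSERT INTO t (id) VALUES (1), (2), (3)")

# The WHERE clause was "forgotten", so this statement would touch every row:
ok = guarded_update(conn, "UPDATE t SET flag = 1", max_rows=1)
print(ok, conn.execute("SELECT SUM(flag) FROM t").fetchone()[0])  # False 0
```

A row-count threshold is crude, but it is the kind of guardrail that turns "restore from backup" into "roll back and try again".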
Anyhow, I have talked to (and even worked for) a lot of people who think they can revolutionize programming by choosing a few problems to solve and ignoring the others. People like that are quick to say "you don't really need that...", and maybe sometimes they are right, but so far the Ancien Régime still holds.