> But here's what I keep wondering: does expanding the scope of the possible eventually erode the deep understanding that makes the expansion valuable in the first place? Like, if you never have to debug a memory leak because the agent handles it, do you lose the intuition that would let you architect systems that don't leak in the first place?
Maybe, but it feels very hard to predict. Neither I nor most engineers I know *truly* understand how a computer works at the lowest level. And those who do probably don't understand the lowest levels of the chips themselves; those who understand that probably don't truly understand how those chips are manufactured; and so on. Modern life is built on abstractions upon abstractions, and no one can understand it all from the ground up.
My question is whether AI will give us another abstraction on top of what we have, or whether it'll just get so smart that it does everything, leaving us with no way to contribute (and, most likely, headed for extinction).