That said, I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population, and it's much more of a touchy emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level versus college.
No, it's not.
Nothing around AI past the next few months to a year is clear right now.
It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.
None of that is certain, of course! That's the whole point: we don't know what's coming.
My point is, I can get a somewhat-useful AI model running at slow-but-usable speed on a random desktop I've had lying around since 2024. Barring nuclear war, there's just no way that AI won't be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you'd still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out.
Yes, you, a hobbyist, can make that work and keep being useful for the foreseeable future. I don't doubt that.
But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.)
And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc.
So the claim that coding with LLM assistance is clearly the future, and that it would be irresponsible not to teach current CS students to code that way, is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to predict just what the future will bring.
The genie is out of the bottle, never going back
It's a fantasy to think it will "dry up" and go away
Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS.
Plenty of tech becomes exploitative (or more exploitative).
I don't know if you noticed, but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.
Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.
Show me actual studies that clearly demonstrate that using an LLM code assistant not only helps you produce code faster in the short term, but doesn't squander all that extra benefit by making the code that much harder to maintain in the long term.
I think that Microsoft will not be willing to operate Copilot for free in perpetuity.
I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.
I think that a lot of the hype around AI rests on the promise that it is going to get better. If it becomes prohibitively expensive for it to do that (i.e., training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.
I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".
In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.
Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.
So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?
Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.
The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.
That's not going to happen. It's already too late to consider that a realistic possibility.
That's true, but you can't use AI in coding effectively if you don't know how to code. The risk is that students will complete an undergraduate CS degree, become very proficient in using AI, but won't know how to write a for loop on their own. Which means they'll be helpless to interpret the AI's output or to jump in when the AI produces suboptimal results.
My take: learning to use AI is not hard. They can do that on their own. Learning programming is hard, and relying on AI will only make it harder.
Depends on what your definition of "hard" is - I routinely come across engineers who are frustrated that "AI" hallucinates. Humans can detect hallucinations, and I have specific processes to detect and address them. I wouldn't call those processes easy - I'd say it's as hard as learning how to do integration by summing.
> but you can't use AI in coding effectively if you don't know how to code
Depends on the LLM. I have a fine-tuned version of Qwen3-Coder where, if you ask it to show you how to compare two strings in C/C++, it will, but then it will also suggest you look at a version that takes Unicode into account.
I have stumbled across very few software engineers who even know what Unicode code points are and why legacy ASCII string comparison fails.
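For anyone curious, here's a minimal C sketch of the failure mode (the byte sequences are the two standard UTF-8 encodings of the same visible text; nothing else is assumed):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Both strings render as "café", but encode the é differently:
       precomposed U+00E9 vs. 'e' followed by combining acute U+0301. */
    const char *precomposed = "caf\xC3\xA9";   /* U+00E9 */
    const char *decomposed  = "cafe\xCC\x81";  /* U+0065 U+0301 */

    /* strcmp compares raw bytes, so canonically equivalent text
       compares unequal unless you normalize it first. */
    printf("strcmp: %d\n", strcmp(precomposed, decomposed)); /* nonzero */
    return 0;
}
```

A real fix normalizes both strings (e.g., with a library like ICU) before comparing; byte-wise comparison is only safe for pure ASCII.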
> but won't know how to write a for loop on their own. Which means they'll be helpless to interpret the AI's output or to jump in when the AI produces suboptimal results
That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
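To make that old worry concrete, here's a minimal C sketch (the array size is arbitrary): two pairs of for loops computing the same sum, where traversal order alone determines how often the CPU stalls waiting on memory.

```c
#include <stdio.h>
#include <stddef.h>

#define N 1024
static double a[N][N];

/* Row-major traversal: consecutive accesses share cache lines. */
static double sum_rows(void) {
    double s = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal of the same data: each access lands on a
   different cache line, so the CPU wastes cycles waiting on memory,
   even though the for loops look almost identical. */
static double sum_cols(void) {
    double s = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void) {
    printf("%f %f\n", sum_rows(), sum_cols());
    return 0;
}
```

Both functions are "correct", and most working programmers never look under that abstraction; the field survived anyway.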
My students don't seem to have a problem using AI: it's quite adequate to the task of completing their homework for them. I therefore don't feel a need to complete my buzzword bingo by promoting an "AI-first classroom." The concern is what they'll do when they find problems more challenging than their homework.
> I have stumbled across very few software engineers who even know what Unicode code points are and why legacy ASCII string comparison fails
You are proving my point. If the programmer doesn't know what Unicode is, then the AI's helpful suggestion is likely to be ignored. You need to know enough to make sense of the AI's output beyond a superficial level.
> That's a very large logical jump. If we went back 20 years, you might come across professors and practising engineers who were losing sleep that languages like C/C++ were abstracting the hardware so much that you could just write for loops and be helpless to understand how those for loops were causing needless CPU wait cycles by blocking the cache line.
We still teach that stuff. Being an engineer requires understanding the whole machine. I'm not talking about mid-level marketroids who are excited that Claude can turn their Excel sheets into PowerPoints. I'm talking about actual engineers who take responsibility for their code. For every helpful suggestion that AI makes, it botches something else. When the AI gives up, where do you turn?