There is a reason why we still have people working at McDonald's even though fully automating it has been possible for a couple of decades now.
https://en.wikipedia.org/wiki/Ice_trade
It was more economical to send people out to cut ice from a lake in Maine and ship it by rail to Chicago than it was to freeze water from a local supply. It was also more reliable: the ice trade's technology was mature, whereas early ice plants often broke down just when meatpackers needed a consistent supply.
There's no reason why this won't be the case for AI unless semiconductor manufacturing continues its exponential performance/cost growth. The demand for technologically obsolete goods and services does not instantly disappear when a superior product enters the market.
Human software engineers right now are more reliable than AIs at most price points. The same holds in most industries where machine learning has a foothold.
How did you come up with this number? It seems pretty unrealistic.
> There is a reason why we still have people working at McDonald's even though fully automating it has been possible for a couple of decades now.
Maybe the low salary is the reason? If it is a bit more costly to automate certain aspects of manual labor, then the low salaries might remove the incentive to do so. This is not the case for software engineering.
If it costs $1m p/y to run a machine that cooks burgers and fries, or $30k for an employee who can do that _and_ cover something else when someone else is ill, it's a no-brainer. But businesses had to discover that the hard way; until the '80s, most people were still convinced automation would win everywhere, because it had won (and won big) in manufacturing. A combination of factors, from the '80s onwards, made labor costs effectively fall, which created our reality where certain jobs are so cheap that automating them makes no sense.
The "problem" is that, in certain regions, software development costs reached a point where automation looks very, very appealing. If a machine costs 500k p/y to replace a few 150k p/y SWEs without all those pesky employment complications, businesses will happily choose "AWS AI CloudDeveloper"...
https://www.theregister.com/2023/10/11/github_ai_copilot_mic...
"Make it profitable" appears to be a secondary concern in the AI space.
If 1M context uses 32x the memory of 32k, it's a non-starter. Even a smallish LLM like Mixtral uses 4-8 GiB of memory just for your prompt. You would need 256+ GiB at 1M...
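The 32x figure follows from the KV cache growing linearly with context length. A back-of-the-envelope sketch, assuming a Mixtral-like configuration (32 layers, 8 KV heads via grouped-query attention, head dim 128, fp16 cache — all assumed values for illustration, not official figures; the absolute numbers shift with the config, but the linear scaling doesn't):

```python
def kv_cache_bytes(tokens, layers=32, kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Estimate KV-cache size: one K and one V tensor per layer,
    each [tokens, kv_heads, head_dim] at the given element width."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem * tokens

GIB = 1024 ** 3
print(f"32k context: {kv_cache_bytes(32_768) / GIB:.1f} GiB")     # 4.0 GiB
print(f"1M context:  {kv_cache_bytes(1_048_576) / GIB:.1f} GiB")  # 128.0 GiB
```

Since every term is constant except `tokens`, a 32x longer context means a 32x larger cache, regardless of which exact model dimensions you plug in.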
I read somewhere that there was a recent breakthrough that enabled this.
Even if it costs a lot to run inference with 1M token context, it is hard to imagine it would cost anywhere close to a software engineer salary.