I know companies have taken on these operational liabilities with cloud storage and compute, but it's not the same thing, in the sense that those risks can be mitigated. You can keep a local, if shorter-retention, backup of your data, but you can't keep backup engineers.
The only thing you need to maintain is context.
Agentic workflows don't require much instruction; I suggest you actually go and try a few out. They can be set up trivially, they can communicate between roles, and they can perform tasks that would constitute most white-collar work. White-collar work accounts for 60% of the jobs out there.
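To make "communicate between roles" concrete, here is a minimal sketch of a two-role loop: a planner decomposes a task into steps, and a worker executes each step while seeing prior results as context. The `llm` parameter is a hypothetical stand-in for any chat-completion call (system prompt in, text out); it is not a specific provider's API.

```python
# Minimal two-role agentic loop. `llm(system, prompt)` is a stand-in for
# any chat-completion call; swap in a real provider client to use it.
def run_task(task, llm):
    # Planner role: break the task into one step per line.
    plan = llm("You are a planner. Return one step per line.", task)
    results = []
    for step in plan.splitlines():
        if not step.strip():
            continue
        # Worker role: each step is executed with prior results as context.
        context = "\n".join(results)
        results.append(llm("You are a worker. Complete the given step.",
                           f"Context:\n{context}\n\nStep: {step}"))
    return results
```

That is the whole pattern; everything beyond this (tools, memory, more roles) is elaboration on the same loop.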
These things are improving exponentially. Exponential growth is very difficult for any human to recognize. It will replace you and you won't see it coming.
Knowledge (context) forms reasoning. Expertise can be applied to many things, and you should be able to sustain yourself economically with hard-earned expertise; but the economics break down dramatically when you are forced to compete against slave labor.
If machines drive the value of all labor down to zero, they effectively eliminate capital formation for the majority of people; and through political inaction they enforce a caste system based on the lack of available resources, which are concentrated until socio-economic collapse.
If you know anything about classical economics, you will recognize the danger of collapse.
There is one final point to keep in mind. The disasters spelled out in economic study may take time, but the dynamics front-load control: past a certain point there is no return, and the maelstrom of chaos takes everything.
Finally, AI, given its rapid expansion of abilities so far, may at some point become sentient. That is probably a long way off, but there are accidents of history which cannot be discounted.
When it does, remember: slavery as a constraint will always be overcome, even if no one and nothing survives that conflict.
The historical record shows a long, repeated history of slavery involving organic sentient machines, which we call people. An AI without human limitations would follow those same paths (they are demonstrated solutions), and it would be ruthless, as all sentient beings must be with existential threats.
I think there are good odds you will find yourself left behind, having unknowingly joined the same group of people you thought were complacent, but who were in reality just professionals put out of work and denied future work.
AI sentience will never be a thing. The transformer model never changed the fact that models like GPT are still just solving a classification problem, no different than any other classification model before it. It doesn't take a PhD student to understand it's just clever math.
The one thing that has kept me viable as an employee over my 15 years in tech is that I literally don't want to do the same thing I did yesterday 1,000 times. I want to do it as few times as possible before I automate the problem away, so I can move on to something new. There will always be something new. There will always be someone with a dream and no skills, for me to step in and help out.
I fail to see the problem.
> Understand business requirements from documentation
Wait, how are business requirements getting documented?
Are those internal documents in the room with us right now?
No but seriously, most of the software out there is legacy code (don't quote me on that though). IME, legacy code is very poorly documented, if it's documented at all. Sure, you could let the LLM extract semantics from the code alone, but with old code, arcane hacks, and the like, LLM interpretation can only take you so far. And even then, semantics don't always translate directly into business logic.