Which includes this excellent line:
> Unfortunately, the winds of change are sometimes irreversible. The continuing drop in cost of computers has now passed the point at which computers have become cheaper than people. The number of programmers available per computer is shrinking so fast that most computers in the future will have to work at least in part without programmers.
This confused my teacher, as he knew this guy wasn't super technical, and he asked him more about it. I may not have the details exactly right, but the man said something like "I use Lotus Notes every day!"
The word programmer had a very different meaning 40 years ago.
Writing software has always been a skill with no ceiling. Writing software can be literally equivalent to doing research-level mathematics. It can also be changing colors on a webpage. This is why I have never been worried about LLMs taking software jobs, but it is possible they will cause the level of skill required to be employable to spike.
Despite all the astounding developments in AI/ML, though, I still think there's a critical need for the application of human/biological imagination and creativity. Sure, the amount of leverage between thoughts and CPU cycles can be utterly giant now, but that doesn't seem to diminish the need (where performance or correctness/fewer bugs are needed) for a full understanding of what the computer actually gets up to in the end.
For what it's worth, we do have an ML department at RSP and they are doing great! But I'm not sure we'd get very far if we tried to vibe-code the underlying pipeline, as it really requires full understanding of many interlocking pieces.
Democracy is about governance, not access.
A "democratized" LLM would be one in which its users collectively made decisions about how it was managed, or one in which the companies that owned LLMs were run democratically.
It can be about both meanings. The additional meaning of "democratize" as "make more accessible" is documented in the Oxford and Merriam-Webster dictionaries:
https://www.encyclopedia.com/humanities/dictionaries-thesaur...
https://www.merriam-webster.com/dictionary/democratic#:~:tex...
(Or alternatively, it's getting harder to stamp out "shadow IT" and all the risks and headaches it causes.)
What you say about big tech is true at the same time, though. I worry about what happens when China takes the lead and no longer feels the need to do open models. The first hints are already showing: advance access to ds4 only for Chinese hardware makers.
The problem was never access barriers, but the fact that people are too lazy to study even a 200-300 page book on something as simple as Ruby on Rails.
Real democratization of programming is free access to compilers, SDKs, etc. AI coding does nothing to help that. In fact, it hurts it, because those non-programmers only get access to the AI tools on the terms of the AI companies. Sure, they could train their own models, but then we're back to having to learn things.
I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Will the AIs get good enough so they/we won't have to? Or will people realize they are programming and discipline up?
The Mythical Man Month is just over half a century old, yet still reads like it was written yesterday.
Worse, they were doing functional programming just by chaining formulas without side effects, surpassing the skills of most self-proclaimed programmers out there.
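To make the analogy concrete (my own illustration, not from the comment above): each spreadsheet cell is a pure expression over other cells, so a column of formulas composes like a chain of side-effect-free functions. A minimal sketch in Python, with a made-up 8% tax rate:

```python
# Each "cell" is a pure expression over earlier cells; nothing is mutated,
# just as spreadsheet formulas only read from other cells.
prices = [19.99, 5.49, 12.00]          # column A: raw inputs
with_tax = [p * 1.08 for p in prices]  # column B: =A1*1.08, =A2*1.08, ...
total = round(sum(with_tax), 2)        # a =ROUND(SUM(B:B), 2) cell
print(total)                           # → 40.48
```

Recompute any input and every downstream "cell" follows, which is exactly the dataflow model spreadsheets have had all along.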
And then of course there's this: https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...
> Or will people realize they are programming and discipline up?
Well, they apparently haven't with spreadsheets, 50 years on, so I wouldn't be optimistic.
Or will there be coding across disciplines, and attendant theories of literacies in context?
What I like about the OP is the consonance with literate practices, which has gone through similar generations of "our children don't know how to [...]" alongside of "our children will not need to [...] because of the machines."
Somehow civilization continues to function!
Makes me a bit less terrified that untested vibe coded slop will sink the economy. It's not that different from how things work already.
The difference is those spreadsheets were buried on a company internal fileshare and the blast radius would be contained to that organization.
Today vibe coders can type a prompt, click a button, and their thing is exposed directly to the internet and ready to suck up any data someone uploads.
(This paper was extremely influential in pushing austerity policies of questionable efficacy during the financial crisis.)
We don't want _more_ of this.
Citation needed, I think.
Using Excel in the traditional sense isn't the same as programming, unless they were doing some VBA or something like that, which the vast majority of Excel/spreadsheet users don't.
> spreadsheet formulae
formulas. We aren't speaking Latin here.
> I see an analog with AI-generated code: the disciplined among us know we are programming and consider error and edge cases, the rest don't.
Programming isn't really about edge cases or errors.
Define "here", please! Perhaps your "here" and mine differ, but the view from my here is that while all three plurals are generally acceptable, formulae is the correcter double plus good spelling for this context.
To ignore that pattern and say everything's going to be automated and humanity will be irrelevant seems to me to be... more of a death wish against human agency, than a prediction based on reality.
The difference this time is that the thing they're trying to automate is intelligence. The goal is a machine that's as smart as a Nobel Prize winner or a good CEO, across all fields of human intellectual endeavor, and which works for dollars an hour. The goal is also for this machine to be infinitely copyable for the cost of some GPUs and hard drives.
The next goal after that will be to give that machine hands, so that it can do any physical labor or troubleshooting a human can do. And again, the goal is for the hands to be cheaper to produce and cheaper to automate than humans.
You may ask yourself, who would need humans in a future where all intellectual and physical tasks can be done better and cheaper by a machine? You may also ask yourself, who would control the machines? You may ask yourself, what leverage would ordinary humans have in a future that no longer needed them for anything? Or perhaps you would not ask those questions.
But this is the future investors are dreaming of, and the future that they're investing trillions of dollars to reach. That's the dream.
I believe that full automation of the mundanities of human life is coming in the fullness of time. But for that insight to be helpful to me, I have to get the timing right, and the data suggests I should be extremely skeptical about excitable tech guys predicting big things in short time frames.
Part of me thinks that we're already reaching peak stuff/employment/the current system.
We are currently churning out graduates who work in coffee shops. More and more employment is make-work. The issue is whether we can carry on requiring work, making it a moral requirement.
I suspect it'll be like the Industrial Revolution: when the average labourer moved to a factory in the city and lived in a slum, they were worse off. It took time for the conditions of the working class to improve.
Basic income is touted as the solution, but then globalisation means workers are moving much more, and I'm not sure the two are compatible. Not that I have a better idea.
I do think we need a cultural change decoupling work from self worth. It's becoming less and less defensible to require everyone to work to be 'deserving'.
All that being said, there will still be jobs; there will always be demand for the handmade, or for something that isn't soulless corporatism. Although I'm starting to sound like Star Trek's view of the future, which may not be achievable.
Fundamentally unhinged? How presumptuous of you to declare with confidence that AI will never become more capable than humans.
But I suppose it's fitting. If after all that has happened your priors still have not budged then I'm sorry to say you will probably never understand this.
> I should be extremely skeptical about excitable tech guys predicting big things in short time frames.
Edit: I read your other comment. I don't disagree with you here.
I have certainly never met anyone who works in "loom engineering" in my entire life.
All the sheets we saw in that factory, and in our hotels, were noticeably thicker and stiffer than American sheets, somewhere between American sheets and denim. When we asked about that, they seemed to feel sorry that we only had thin, flimsy sheets.
Will something come along some day that will actually drastically reduce the need for programmers/developers/software engineers? Maybe. Are we there yet? My LLM experience makes me seriously doubt it.
A couple of years later, Microsoft came out with Visual Basic, and I thought, OMG, I'm toast. Secretaries are going to be writing code. I was a developer by this time, writing code in FoxPro and getting into PowerBuilder.
All this to say, "I've been in IT for many years, and companies promise a lot but rarely deliver completely on their promises." Do programmers and others in the tech field need to adapt? Yes. Is AI going to be disruptive to some extent? Yes. Are all jobs going away? No.
Back in the '80s there were ads pitching tools to replace the "dinosaurs", the people everyone looked to when their 4GL language failed to solve the problem.
Don’t facilitate losing your job.
Two years ago, one former exec at my place was perfectly happy to throw resources (his word) from India at a problem, while unwilling to pay the vendor for the same thing. I voiced my objection once, but after it was dismissed I just watched the thing blow up.
I am not saying the current situation is the same. It is not. But it is the same hubris, which means miscalculations will happen (like with Dorsey's Block mass firing).
Now I've come to realize the error in my ways: this is probably not going to happen. What will happen instead is that the ones doing the "shuffling of shit" are just going to be agents themselves, prompted by a more senior slop-grammer specialized in orchestrating the "shuffling of shit".
The only reason I remember this encounter so clearly was because he got rather annoyed, to the point of being aggressive, when I pointed out that most of the computing landscape was built on C and this wasn’t going to change any time soon.
Multiple decades later, and C-derived languages still rule the world. I do sometimes wonder if his opinion mellowed with time.
The reasons vary, but in general, just as some people don't want to touch maths (even if they might be good at it if they tried), some people loathe the very idea of being technical, either because they think it is beneath them, or they just don't see themselves that way.
And like the article explains, even when "programming" tools seem to become simpler to use, they still require technical specification, and once people feel like they are getting close to "programming" they check out.
But this is exactly what my comment is saying? Not sure what you added that was not already in my comment, you are agreeing with me.
Arguably programming is as much learning as it is writing code. This is part of the reason some people copy an entire API and don't realise they're not so much building useful code as building an understanding.
To them, an LLM is indistinguishable from a programmer. From the point of view of authority, progress happens one meeting at a time. The reality is that there is a pyramid of experts beneath the authorities, that keep everything running smoothly, in spite of the best attempts of the authorities to demolish the foundation of the pyramid by "helping".
EDIT: to end on a positive note, it does not have to be this way. We just have to be willing to understand _how_ the organization we are a part of actually functions. And that means actually being curious instead of merely authoritative. I understand that curiosity is hard to maintain when you swim with sharks, so maybe don't swim with sharks.
Some do, but not all did that. During work-from-home I was staying with a friend of a friend who was a programmer, and he had a meeting. I was amazed at how simple the things he was discussing were, like "please add an error check, right now you've made a form where you can insert wrong data; make sure the form is visible on a small screen, right now it is not; etc.". And they talked ~2 hours about each item, with the other person showing him exactly what was not working. I do not know the history (maybe he was retraining or something), but if this was what he was usually doing, it was very inefficient and quite simple.
Most of my work was more similar to what you describe (fighting vague instructions and pushing back on unfeasible ideas), but I wonder how much of "the industry" does this.
As for management, I worked with all kinds. Employees bear a small part of the responsibility to look for and select good organizations; otherwise power-hungry idiots arrive at the top and start to dictate, and nothing crumbles because everybody just stays.
I once worked in a big bank where basically 70% of their C-suite and upper management were engineers (not only SE, but civil, electric, naval, etc) who had years of experience in IT. The rest were lawyers and a couple economists.
And I am not even talking about the worlds of quant and hft.
The only issue I saw after a month of building something complex from scratch with Opus 4.6 is poor adherence to high-level design principles and consistency. This can be solved with expert guardrails, I believe.
It won’t be long before AI employees are going to join daily standup and deliver work alongside the team with other users in the org not even realizing or caring that it’s an AI “staff member”.
It won’t be much longer after that when they will start to tech lead those same teams.
In my opinion, attempting to hold the hand of the LLM via prompts in English for the 'last mile' to production ready code runs into the fundamental problem of ambiguity of natural languages.
From my experience, those developers that believe LLMs are good enough for production are either building systems that are not critical (e.g. 80% is correct enough), or they do not have the experience to be able to detect how LLM generated code would fail in production beyond the 'happy path'.
Which is _always_ the case with these things, honestly. Remember Ruby on Rails? Make a Twitter clone in half an hour by just writing some DSL! Of course, in reality Rails was _not_ a productivity revolution, and making _real_ software which had to be operated at scale and maintained, and work properly, in it wasn't much easier than it had been previously.
For someone who is able to design an end to end system by themselves these tools offer a big time saving, but they come with dangers too.
Yesterday I had a mid-level dev on my team proudly present a web tool he "wrote" in Python (to be run on localhost) that runs kubectl in the background and presents things like the versions of images running in various namespaces. It looked very slick; I can already imagine the product managers asking for it to be put on the network.
So what's the problem? For one, no threading whatsoever, no auth, all queries run in a single thread, and on and on. A maintenance nightmare waiting to happen. That is the risk of a person who knows something, but not enough, building tools by themselves.
Over two more weeks I can work with those same five to ten people (who often disagree or have different goals) and get a first draft of a feature or small, targeted product together. In those latter two weeks, writing code isn’t what takes time; working through what people think they mean versus what they are actually saying, and mediating between groups when they disagree (or mostly agree), is the work. And then, after that, we introduce a customer. Along the way I learn to become something of an expert in whatever the thing is and continue to grow the product, handing chunks of responsibility to other developers, at which point it turns into a real thing.
I work with AI tooling and leverage AI as part of products, where it makes sense. There are parts of this cycle where it is helpful and time saving, but it certainly can’t replace me. It can speed up coding in the first version but, today, I end up going back and rewriting chunks and, so far, that eats up the wins. The middle bit it clearly can’t do, and even at the end when changes are more directed it tends toward weirdly complicated solutions that aren’t really practical.
That’s a bit… handwavy…!
I recall PowerBuilder in particular; it was all the rage.
- Software engineering is a cost center, they are middlemen between the C-level ideas and a finished product.
- Software engineering is about figuring out how to automate a problem, exploring the domain, defining context, tradeoffs, and unlocking new capabilities in the process
``` My own eyes spent countless nights observing, with curiosity and wonder and delight, the responses of a computer, as I commanded it with code, like a sorcerer casting spells. I could not have known, that this obedient machine, this silicon golem, was also, slowly and imperceptibly, enchanting me, and changing how my eyes would see.
At the time^21 , I was a mere fifteen years old, young enough, so that the gravity of life was weak enough, and the mind nimble enough, to allow me to explore without any material justification.
The computer was the believed and I was the believer.
A consequence of becoming obsessed^22 with computer programming, is that one starts to see new metaphors, algorithmic metaphors, everywhere one looks. This new metaphorical lens, belongs entirely to the third eye. Without this lens, I would look at a traffic jam, and see a traffic jam. With the lens, I would look at a traffic jam, and wonder if, and to what extent, the latency-throughput trade-off^23 was true for highways. Without the lens, I would read about social theory, and simply see the words. With the lens, I would ask if society was, a tree^24, a graph^25, a tree of graphs, or a graph of trees^26.
To generalize, the computer programmer looks at something, and asks, _is this thing an algorithm, and if so, what kind_? The entire _trade_ of computer programming, it revolves around this question, around the discovery of metaphors that fit^27[13][14].
It is thus little surprise, when a computer programmer asks if (or sometimes asserts that) a certain kind of algorithm^28 is intelligence^29 , consciousness, or both.
The entire ritual of computer programming, is similar to the trade, in that it involves discovering metaphors, not as a means to an end, but as their own end. This ritual is difficult to explain to someone who has never practiced it. Imagine, instead of trying to find metaphors that bridge the real to the algorithmic, one tries to find metaphors that bridge the algorithmic to itself.
It is very similar to what mathematicians do, but it requires writing programs in a very principled and abstract way^30 .
This ritual, unlike the ritual of writing, and unlike the ritual of mathematics, has a dominant material component (the computer) which can make your code, in addition to an _imaginary_ experience, a _material_ experience^31. This makes the computer a medium — an artificial oracle or artificial hallucinogen — that can safely imagine the unimaginable. And like the oracle, the computer exists to provide insight^32.
Without the ritual of programming, there would be no field of chaos theory, nor complex systems (very important for economics and environmental sciences), and _certainly_ no elaborate fractals. Pure mathematics could only scratch the surface, because the mathematical ideas, of the mid 20th century, that our imaginations could access, were insufficient for exploring these systems. Computers allow us, not unlike microscopes and telescopes, to magnify the informational dimension of nature [17].
Computers, and the arcane programming languages that make them obey, are magic machines, that created a new interaction between, two elements of the human psychic triad, the immaterial and material.
What is this triad, and what is its third element? The concept of the triad appears so frequently, in recorded human thought, and in the structure of language, that it is either some kind of adaptive ideal^33 , or a consequence of language itself^34, if not both. Pythagoras called _three_ perfection itself. Plato divided the world into three parts. And, even today, our modern shamans and sages, use triads to discuss the universe.
Roger Penrose has a triad consisting of physical, platonic, and mind. Lacan has a triad consisting of real, symbolic, and imaginary. Plato has a triad of good, truth, and beauty. Of the three, Lacan’s naming is the most self-explanatory.
In this essay, the _material_ is the real, and the _immaterial_ is the other two.
The _trade_ of programming is driven by the _real_, while the _ritual_ of programming is driven by the _imaginary_. A trade is pursued because of real, material concerns (such as covering the cost of living), while a ritual is pursued because of imaginary concerns — concerns that can, more precisely, be called _aesthetic_. ```
> "The name derived from the idea that The Last One was the last program that would ever need writing, as it could be used to generate all subsequent software."
That was released in 1981. Spoiler alert: it was not, in fact, the last one.
There are bookoo other things people could be doing besides coding YASW [Yet Another Stupid Website].
You could argue that coding with LLMs is a form of software reuse, one that removes some of its disadvantages.
The article is a good summary of major movements through the decades, without so much detail that the whole point is lost. I would have put in a slightly different set of things if I were writing that article, but the point would still stand, and I would leave out many things that could be included but would be too much noise.
But, the modal programmer at this point is some person who attended a front-end coding bootcamp for a few months and basically just knows how to chain together CSS selectors and React components. I do think these people are in big trouble.
So while the core, say, 10% of people should, I think, remain in the system, this 90% periphery of pretty bad programmers will probably need to move on to other jobs.
Surprisingly, a lot of the time programmers bring in business experience from other organizations that the business people at the current one don't possess.
Just one example of how this has happened again and again.
In every recession with mass layoffs of programmers (not every recession hits programmers hard), there were many articles saying that whatever the latest thing [see article] was, it was the cause, and that the industry was getting rid of programmers it would never need again.
In every case of course "it is the economy stupid". The tools made little difference in the need for programmers. The tools that worked actually increased the need because things you wouldn't even attempt without the tools were now worth hiring extra people to do.
LLMs seem to be a significant step forward in converting language to code, but they don't seem to engage self-directed abstraction.
If you are a good engineer you can dictate data structures to it too. It then performs even better.
I believe the writing is on the wall at this point. It does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (and/or database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to write the code myself.
This is the worst it's going to be and it's already quite good, it wasn't that good a mere three months ago.
The main pitfall is trying to get an LLM to read your mind, in doing so you are putting too much load on whatever passes for their intelligence quotient. That isn't how you get good results or get a good measure of their capabilities.
Uber in 2010: this is the worst it's ever going to be.
There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?
Meanwhile, search was better in the past and is at this point the best it's going to be.
Enshittification comes for all things.
Of course, in the end, it won't do us humans any good, because when the Singularity AKA Rapture comes, we'll all be converted to Computronium. :-)
Seems like that little kerfuffle with all the 2-digit years in legacy COBOL code was a well-timed distraction.
My own programming career began in 1970, and included an intriguing visit to Stanford's AI center. I've been an avid fan boi of AI ever since. Currently the folks I'm watching are the Thousand Brains Project, an effort to replicate a network of cortical columns and support services.
In any event, I've written code in a variety of languages. My current favorite is Elixir, because it offers fewer booby traps and far more capability than any others I've seen. However, my take on languages is that they let you work at a comfortable level of abstraction.
Still, I'm delighted to have an earnest pair of assistants (ChatGPT and Codex CLI) who are able to discuss ideas and issues in my native language. They can also help me translate my rough notes and speculations into (apparently) working code, using whatever resources are available (eg, languages, libraries). Indeed, one of the hardest things to do is to imagine what they are able to do.
I suspect that there are at least a few nerdy nine year olds who are grabbing onto this. I also suspect that there are folks who will do unfortunate things with this capability. Should be interesting...
All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.
AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.
This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it. Now the AI takes the place of the engineer.
Also, I personally never believed before AI that programming will disappear, so the argument that "this has been hyped before" doesn't touch my soul.
I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.
This is just categorically false.
No-code tools didn't fail because they were "mindless conversions of formal languages to formal languages". They failed because the people who were supposed to benefit the most (non-developers) neither had the time nor desire to build stuff in the first place.
How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.
This is not the case:
- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").
- After the burst of the first dotcom bubble, a lot of programmers were unemployed.
- Every older programmer can tell you how quickly the skills they have can become, and have become, irrelevant.
Over the last decades, the stability and opportunities for programmers looked more like a series of boom-bust cycles.
Experienced through old-school (pre-LLM) practice.
I don't clearly see a good endgame for this.
Does he know about SQL injection? XSS?
Maybe he knows a little about security and asks the LLM to make a secure site with all the protections needed. But how does the manager know it works at all? If you only figure out there's an issue with a critical part of the software after your users' data is stolen, how bad is the fallout going to be?
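For anyone unfamiliar with the SQL injection mentioned above, here is a hypothetical sketch using Python's built-in sqlite3 (the table, user, and payload are all made up for illustration):

```python
import sqlite3

# Toy database: one user with a secret.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # a classic injection string typed into a form field

# Vulnerable: string interpolation lets the payload rewrite the query,
# turning the WHERE clause into an always-true condition. Every row leaks.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{payload}'"
).fetchall()

# Safe: a parameterized query treats the payload as plain data,
# so it matches no user and returns nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
```

The point being: would a manager vibe-coding a site even know to check which of these two shapes the LLM produced?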
How good a tool is also depends on who's using it. Managers are obviously not engineers, unless they were engineers before becoming managers; but you are saying engineers are not needed. So where is the engineer-manager going to come from? I'm sure we're not growing them on engineering trees.
In the real world, the materials are visible, so people have a partial understanding of how things get done. But most of the software world is invisible and has no material constraints other than the hardware (you can't use RAM that isn't there). If the hardware is like a blank canvas, a standard web framework is like a paint-by-numbers book (but one with the lines drawn in pencil, so you can erase them easily). Asking the user to code with an LLM is like asking a blind person to draw the Mona Lisa with a brick.
Paradoxically, this may mean there are more jobs for programmers and programmer-likes alike, as new cottage industries are born. AI for dentists is coming.
Are you suggesting “And Claude, make no mistakes” works?
Because otherwise you need an expert operating the thing. Yes, it can answer questions, but you need to know what exactly to ask.
> This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it
I have yet to see vibe coding work like this. Even expert devs with LLMs get incorrect output. Any time you have to correct your prompt, that's where your argument fails.
And while these tools can be invaluable in some cases, I still don't know how we get from "Hazy requirements where the user doesn't know what they even want" to "Production-ready apps built at the finger-tips of the PM".
Another really important detail people keep missing is that we have to make thousands of micro-decisions along the way to build up a cohesive experience to the user. LLM's haven't really shown they're great at not building assumptions into code. In fact, they're really bad at it.
Lastly, do people not realize how easy it is to convince an LLM of something that isn't true, or vice versa? I love these tools, but even I find myself trying to steer them toward the direction that makes sense to me, not the direction that makes sense generally.
> The Eternal Promise: A History of Attempts at Manned Flight
Anyone banking that this technology isn't going to decrease the demand for programmers, or that it's going to offset the lost jobs with new, related ones ("prompt engineer") are kidding themselves. And frankly, it's probably a good thing, at some level, in that there are far too many people in tech with no business being there. How many "developers" are effectively just gluing bits of JavaScript together and juggling NPM dependencies, all day? And how many of them can't even accomplish this feat without a steady supply of Adderall? Ironically, these people seem to be some of the most enthusiastic AI adopters, so it's in some ways fitting that they'll likely be the first to be made redundant.
I don't take issue with this, except that it's a false comfort when you consider that demand will naturally ebb and individual workload will naturally escalate. In that light, I find it downright dishonest, because the rewards for attaining deep knowledge will continue to evaporate, necessitating AI assistance.
The reason it is different this time around is that the capabilities of LLMs have incentivized the professional class to betray the institutions that enabled their specializations. I am talking about the amazing minds at Adobe, Figma, and the FAANGs who are bridging agentic reasoners and diffusion models with the domain-specific needs of their respective professional users.
Humans are a class of beings, and the humans accelerating the advance of AI in creative tools are the reason things are different this time. We have class traitors among us, and they're "just doing their jobs". For most, willful disbelief isn't even a factor. They think they're helping, while each PR just brings them closer to unemployment.
The only thing that we can do is to not make it worth their time in the long run. Don't let greed and fear slide. Don't hate someone for choosing their family and comfort over your own, hate the system that forces them to make that choice. Hold them accountable, but attack the system, instead of its hostages and victims.
Where you fall depends on where you work and what you work on.
You make a great point about the chain of accountability. But, in my opinion, working professionals are the only agents in the system with the potential to realize their own culpability and divert their actions.
Perhaps, it isn't fair to point to them and call them traitors. Still, they are the only ones with enough agency to potentially organize and collectively push for the kind of ethics that could save us all.
Those massively and widely available benefits will continue to deflate the value of human intelligence until even most of the innovators currently working on them lose their seats at the table too.