It does seem like most people completely ignore the obvious harms caused by AI when talking about using LLMs for programming, as though somehow it is disconnected from the other deployments of the technology.
I feel that the people who completely ignore the harms are the ones who need and/or benefit from the technology and will do whatever it takes to justify their use of it. The rest are people who understand the harms and minimize their interaction with it, followed by the blissfully ignorant.
I was just talking to a content creator who uses AI at work and uses social media platforms to display her personal projects. She said she is fully aware of the harm social media platforms bring, while acknowledging that they empower her to present her work to the world without gatekeeping. AI lets her power through boring office tasks, but she loathes its use in the art world and its replacing of people in general.
I would insist that the deployments of a technology should be disconnected from the technology itself - I criticize AI too, and I get a lot of downvotes for it, but I try to separate the science of AI from its economics and politics.
The harms of AI and other technologies come from two sources: 1. capital-destroying market bubbles, and 2. deployments motivated and enabled by political and moral corruption.
Both of these are in turn enabled and sustained by legislation. That is, we have to talk politics, not technology and not AI. AI has great potential, both for improving human life and for making it a lot worse, and which way it goes depends entirely on politics.
If we fail to cleanly separate these issues and keep moralizing about technology, we will be chasing red herrings and bumping heads in the dark while the tech is deployed against us.
Which is all fine and dandy. But why play the "you simply don't understand it as well as I do" card rather than take a more investigative and curious approach? It just fuels the "holier than thou" vibe Drew has seemingly been cultivating more every day.
It's a disagreement of opinion, not some case of "I'm the only smart person who can realize this", which is why it sours the entire piece.
I'll say this from the perspective of a person who publishes content online: because people's revealed preference is for content written this way. You can spend weeks polishing thoughtful, original content that will get few clicks, or you can crank out throwaway op-eds about AI and get thousands of likes and upvotes from people who just wanted to hear their own beliefs explained back to them.
My stuff has appeared on HN a couple of times over the years, and the less effort I put into it, the better it fared. The temptation to change your writing style and offer increasingly provocative and shallow opinions is difficult to resist.
My point is probably this: if you want to see better stuff, I think you gotta stop engaging with articles like this one. Patrol /newest and upvote cool in-depth stuff.
That's not the tone of the article; he uses the word "pretending". That tells me he thinks people do understand, but don't want to admit it because admitting it would reveal their values.