So until this settles, I don't trust it much, not because of the technology itself but because of the people who are milking it at the moment.
- It costs more jobs than it creates.
- It's the new meme tech, à la SaaS, Cloud, etc., that I have to tolerate now. I love seeing a "Chat with Bing!" button; that's great.
- It's flush with cash due to the US having tonnes of play money to throw around, enabling irresponsible behavior such as pricing below cost. Vast sums of money are burned on this boondoggle around the world annually to achieve middling results. The new Bing AI assistant, for example, does not impress.
- It's an unsolved ethical problem. By comparison, sampling in music requires attribution.
- It drastically exacerbates the accountability problem in a world that is increasingly automated and already struggling with accountability. Look at all the threads here about people getting screwed by Google with no recourse.
- It lowers the barrier to entry for bad actors to gain legitimacy. Good actors never needed great art, or anything else AI can do, as long as they made things with care and with the skills they had.
- The world didn't need more content. It is awash with content already. A lot of the content we have now is not good and AI isn't going to make it magically better.
- AI as implemented specifically targets jobs that never represented real blocking problems for humanity. It doesn't purify water, eliminate filthy, backbreaking, or intense and repetitive near-slave labor, or anything else at this point; it came out of the desire of Elon and friends to take Shutterstock's annual profits. This is an ignoble goal. The task of automating the worst labor safely and reliably remains extremely challenging even when AI is involved.
A better question is what problems does AI really solve? Are those benefits worth the massive cost?
When I see something and I know that it was created with child labor, it induces the same disgust that AI products do. Perhaps I can do great and good things with some tool or product made with child labor, but that doesn't change the ethical abomination at the core of that product.
If AI isn't paired with UBI, then we are simply on a collision course for the elimination of tens of thousands of, admittedly awful, jobs. What are all those people going to do? Truck drivers, petty artists, call center workers, etc. We don't have Star Trek style replicators yet, and we have not uniformly evolved as a people to believe in a robust set of rights for our fellow man.
I understand why capitalism has forced this situation to happen, but it is incumbent on governments to aggressively protect their citizens and workers from the AI menace.
I know human nature, and that is what is concerning about AI.
Right now, most people have lived in a world where LLMs didn't exist. They know not to trust them as an oracle of truth. They know the models are flawed, and they can produce counter-examples to show when a response is incorrect. They will second-guess the results, and may consider that the results could be biased by a flawed or limited data set.
If LLMs stick around, this situation may be different. As a technology matures, it gets incorporated into the lives and thinking of its users. Distrust transitions into trust with exposure and time. People forget the basis of, and the limitations inherent to, the underpinnings of the tech they use all the time.
That's what terrifies me: a world where people stop actively thinking and delegate decision-making to a language model built from flawed data. It's just so convenient to delegate. Not questioning is so much easier than questioning.
"AI" OTOH does not exist unless one adopts a strange definition of "intelligence".
An intelligent person can tell us how they arrived at a conclusion. "AI" cannot.
That's a massive timekill to (temporarily) "fix" when the conclusion is wrong. The process is a black box.
Even with old search techniques one can understand the process used to arrive at the results. When results are not what we want, we can understand why.
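To make that contrast concrete, here is a toy sketch (the corpus and function names are invented for illustration) of a classic inverted-index search. Unlike an LLM, every ranking decision is inspectable: the result carries the exact terms that matched, so when the output is wrong you can see why.

```python
from collections import Counter

def build_index(docs):
    """Map each term to the set of documents containing it."""
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def search(index, query):
    """Score documents by how many query terms they contain.
    Each result includes the matched terms, so the ranking
    is fully explainable, with no black box involved."""
    terms = query.lower().split()
    scores = Counter()
    matched = {}
    for term in terms:
        for doc_id in index.get(term, ()):
            scores[doc_id] += 1
            matched.setdefault(doc_id, []).append(term)
    return [(doc_id, score, matched[doc_id])
            for doc_id, score in scores.most_common()]

docs = {
    "a": "cheap water purification methods",
    "b": "image generation with diffusion models",
}
index = build_index(docs)
results = search(index, "water purification")
# results[0] is ("a", 2, ["water", "purification"]):
# document "a" ranks first, and we can point at exactly which terms put it there
```

This is of course far simpler than a real search engine, but the point survives at scale: a deterministic scoring pipeline can always be traced back through its steps, which is precisely what a generated answer cannot offer.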
Just a feeling.
Perhaps based on different priors, certainly different 'lived' experiences. Or prions. Why do yours fire that way? Can you explain it, meatpuppet? You only exist in my head, after all, as does everything. Unless there are absolutes, but they after all are givens, and if indeed you are real and so am I, then in what scope is our realness and how do we communicate that knowing, or feeling depending on truths, if not with a string of mutually understandable covenance, in this case language.
As such, the prediction that AI will "replace all artists" is obviously way overblown. At best, it will be a helpful tool, along the lines of Photoshop or After Effects.
The leap from zero to any positive number is an infinite improvement, but there is a big difference between 2 and 2e6.
Too old for this shit.
(yes, that's a personal problem and doesn't relate to the merit of the technology)
I always thought the term "artificial intelligence" had a sort of disabling effect, as if there were an intelligence outside of ourselves that drives us in some direction, good or bad. "Technological progress" implies we are the ones driving the changes and the problems they will invariably bring. We sort of grasp that this tech will cause profound impacts on society of some vague quality, enough to leave an "ethics" section in every white paper that ships with freely distributed code and instructions for use, yet we continue plowing on regardless of what those impacts could possibly be. How sustainable is this? Will there ever come a time when uploading code or even papers to GitHub for anyone to consume becomes taboo because of the stigma and the change we've inflicted on ourselves?
I think the inflection point for those problems creeping into society at a visible everyday level is on a much quicker time scale than AGI. Sometimes I think it's like equipping people with pistols that shoot precision-guided homing bullets - not so much on the scale of a civilization-ending scenario, but it changes the game in its own significant ways. Look at comments accusing others of using ChatGPT to write their responses for them. I think most tech can cause these effects and it's worth questioning what it's meant to accomplish as they're created or used.
At times I wonder if the end stage of any given intelligent civilization is to delegate all parts of its thought process to technology that can be engineered to be superior, with all consequences that entails, because there's no point to being stuck with the tech that is already there forever. The thought that scares me the most is that the revolution might not be directed by governments or angry anarchists, but indirectly, by bored machine learning engineers sitting in their rooms contributing just one more paper or PyTorch implementation towards an inflection point in humankind because it's fun and rewarding to them.
And even if we're supposed to stop advancing this tech to prevent irreversible societal change, would it even be possible if we tried? There are 8 billion of us on Earth and metric tons of GPUs in existence. The question of whether progress can ever be halted, in a state such as ours, in the name of self-preservation is one I'll probably keep in mind for the rest of my lifetime.