One of my friends sent me a delightful bastardization of the famous IBM quote:
A COMPUTER CAN NEVER FEEL SPITEFUL OR [PASSIONATE†]. THEREFORE A COMPUTER MUST NEVER CREATE ART.
Hate is an emotional word, and I suspect many people (myself included) may leap to take logical issue with an emotional position. But emotions are real, and human, and people absolutely have them about AI, and I think that's important to talk about and respect that fact.
† replaced with a slightly less salacious word than the original in consideration for politeness.
Please don't. That offends me much more than a very mild word ever could.
I also think the 10 hours of random electro swing or other genres of generated music is of extremely high quality. It isn't bland music; on the contrary, it is playful and varied. Example:
https://www.youtube.com/watch?v=LmUSK1IjoQg&list=RDLmUSK1Ijo...
It is entertaining and a genuine viewing experience. And yet, it still doesn't feel the same once you know it was just generated by some carefully selected prompts. Sure, that itself is a creative endeavor, but I would have preferred for AI to clean my room for me instead of slowly replacing every creative avenue from writing to art to music.
I continue to play music myself, but I will never reach a level AI is able to achieve in a few minutes. Sure, this example certainly took a while to create and the result is awesome. So what do we do with all the superfluous artists now?
it was extremely bland... dry as an oat in a flash freezer...
1. You're on the internet. Nobody will get mad if you say "horny".
2. Bastardizing a quote is a worse outcome than you missing an opportunity to virtue signal your puritan values. Just say the original quote.
For example, someone can feel like they already have to compete with people, and that's nature, but now they have to compete with machines too, and that's a societal choice.
I am interested in the intelligible content of the thing.
Also, AI does not reason. Human beings do.
Can other humans (aka NPCs)? They seem like they do so I treat them as such, but as far as I know other humans and a sufficiently emoting AI both act equally like they feel emotions.
You hear what you want to hear. You think fine artists - and really, how many working fine artists do you really know? - don't have sincere, visceral feelings about stuff, that have nothing to do with money?
How could a practical LLM enthusiast make a non-economic argument in favor of their use? They're opaque, usually secretive, jumbles of linear algebra; how could you make a reasonable non-economic argument about something you don't, and perhaps can't, reason about?
AI is not intelligent or emotional. That's not a "strongly held belief"; it simply hasn't been proven.
Tools do not dictate what art is and isn't, it is about the intent of the human using those tools. Image generators are not autonomously generating images, it is the human who is asking them for specific concepts and ideas. This is no different than performance art like a banana taped to a wall which requires no tools at all.
I looked up Picasso's Guernica now out of curiosity. I don't understand what's so great about this artwork. Or why it would represent any of the things you mention. It just looks like deranged pencilwork. It also comes across as aggressively pretentious.
What makes that any better than some highly derivative AI generated rubbish I connect to about the same amount?
Neither will a paintbrush.
The tool does need to, though.
One technical definition of empathy is understanding what someone else is feeling. In war you must empathize with your enemy in order to understand their perspective and predict what they will do next. This cognitive empathy is basically theory of mind, which has been demonstrated in GPT-4.
https://www.nature.com/articles/s41562-024-01882-z
If we do not assume biological substrate is special, then it's possible that AIs will one day have qualia and be able to fully empathize and experience the feelings of another.
It could be possible that new AI architectures with continuously updating weights, memory modules, evolving value functions, and self-reflection, could one day produce truly original perspectives. It's still unknown if they will truly feel anything, but it's also technically unknowable if anyone else really experiences qualia, as described in the thought experiment of p-zombies.
I'm being slightly flippant but I do think this is a motte and bailey argument.
Not every painting is a Guernica, nor does it need to be.
And not every aesthetically pleasing object is art. (And finally - art doesn't even have to be aesthetically pleasing. And actually finally "art" has a multitude of contradictory meanings)
My computer does. What evidence would change your mind?
Now, just like you can with Studio Ghibli art, you can generate new images in the style of Guernica.
On the other hand, if I saw a product labelled "No AI bullshit" then I'd immediately be more interested.
But that's just me, the AI buzz among non-techies is enormous and net-positive.
Which, granted, describes most companies. But ultimately they do not serve you or your technical needs, because they are literally incapable of understanding them. Any intersection between your technical needs and their provisions is pure coincidence.
Almost like its all emotional-level gimmicks anyways.
If I see "No AI bullshit" I'd be as skeptical if it said "AI Inside". Corpos tryina squeeze a buck will resort to any and all manipulative tactics.
Plague of our ages I guess. Ironically AI might even make it worse.
And then we'll wait till the next bubble.
Gains seem to have leveled off tremendously. As far as I can tell folk were saying "Wow, look at this, I can get it to generate code... it does really well at tests, and small well defined tasks"
And a year or a year and a half later we're at like... that + "it's slightly better than it was before!" lol.
So, yeah, I dunno, I suspect we'll see a fair amount fall away and some useful things continue to be used.
Beneficiaries are the ones who care about the actual tech and what it can do for them. Investors are the ones who care about making money off the tech. For the Beneficiaries, AI hype is about right where it should be, given the demonstrable power of the tech itself. For Investors, it may be a dangerous bubble - but then I myself am a Beneficiary, not an Investor, so I don't care.
I don't care which companies get burned on this, which investors will lose everything - businesses come and go, but foundational inventions remain. The bubble will burst, and then the second wave of companies will recycle what the first wave left; the tech will continue to be developed and become even more useful.
Or put another way: I don't care which of the contestants wins a tunnel-digging race. I only care about the tunnels being dug.
See e.g. history of rail lines, and arguably many more big infrastructure projects: people who fronted the initial capital did not see much of a return, but the actual infrastructure they left behind as they folded was taken over and built upon by subsequent waves of companies.
Also, you seem to forget that irrespective of cash profits in the future, the question is: will this investment generate excess returns? Nope. That's what investors care about; it's not even profit, actually.
My nuanced position is that it's great in some niche scenarios - speech to text as an example, or for isolating instruments in audio - and vastly overhyped in everything else, like LLMs. It's a mediocre google searcher at best.
For me, I kind of wish this site would go back to the good old days where people just shared their nerdy niche hacker things instead of filling the first page with the same arguments we see on the other parts of the internet over and over again. ; ) But granted, I was attracted by the clickbait title too, so I can't blame others.
Crypto always had hard to understand and abstract use cases. It became popular because the value was going up.
LLMs are different. There are an endless amount of use cases that people can easily understand. Now, just how well it does things is debatable but there is a very clear value gain.
Hell, I got it to give me a list of recipes for the week based on my preferences and dietary needs, then create a grocery list, in two minutes. Did I need an LLM for this? No, but it made it so much faster, and this is what I am finding with a lot of tasks.
It feels very much like the crypto bubble a few years back (the second, larger one, when we were informed that soon everything would be an NFT). This is actually one thing that puts me off AI; on top of a certain amount of scepticism about whether it is actually useful, the whole space feels very, very, _very_ grifter-y. In some cases it is literally the same people who were pushing NFTs a while back.
Just the other day someone posted the ImageNet 2012 thread (https://news.ycombinator.com/item?id=4611830), which was basically the threshold moment that kickstarted deep learning for computer vision. Commenters claimed it doesn't prove anything, it's sensational, it's just one challenge with a few teams, etc. Then there is the famous comment when Dropbox was created that it could be replaced by a few shell scripts and an ftp server.
"I strongly feel that AI is an insult to life itself." - Hayao Miyazaki
I'm going to start using this quote. Studio Ghibli producer, Suzuki: "So, what is your goal?"
ML Developer: "Well, we would like to build a machine that can draw pictures like humans do."
<jump cut>
Miyazaki VO: "I feel like we are nearing to the end of times."
"We humans are losing faith in ourselves."
Source: https://www.youtube.com/watch?v=ngZ0K3lWKRc
Of course, the form of AI has changed over the years, but the claim that this quote could be tied to Miyazaki's general view on having machines create art is not totally baseless.
Look at all the AI-written and AI-illustrated articles being published this year. Look at how smooth the image slop is. Look at how fluent the text slop is. Higher quality slop doesn't change the fact that nobody could be bothered to write the thing, and nobody can be bothered to read it.
As if it's in any way less horrifying having the entire Internet infested with AI slop.
Wish some of the AI detectors realized when they're doing a worse job reasoning than the LLMs they criticize.
The quote was taken a little bit out of context.
Regardless of how you feel about AI, the specific instance Miyazaki was reacting to was, indeed, an insult to life itself!
He's right that to someone whose art is about capturing the world through a child's eyes, the dreamlike consonance of everyday life with simple fantasy, this is abominable.
So that's definitely a misquote, though I wouldn't be surprised if Miyazaki dislikes AI.
The author is also changing the subject of the quote.
He said it reminded him of a disabled friend that this technology was an insult to life itself.
Me, I hate the externalities, but I love the thing. I want to use my own AI, hyper optimized and efficient and private. It would mitigate a lot. Maybe some day.
It's weird how AI-lovers are always trying to shoehorn an unsupported "it does useful things" into some kind of criticism sandwich where only the solvable problems can be acknowledged as problems.
Just because some technologies have both upsides and downsides doesn't mean that every technology automatically has upsides. GenAI is good at generating these kinds of hollow statements that mimic the form of substantial arguments, but anyone who actually reads it can see how hollow it is.
If you want to argue that it does useful things, you have to explain at least one of those things.
It's bad at
- Actually knowing things / being correct
- Creating anything original
It's good at
- Producing convincing output fast and cheap
There are lots of applications where correctness and originality matter less than "can I get convincing output fast and cheap". Other commenters have mentioned being able to vibe-code up a simple app, for example. I know an older man who is not great at writing in English (but otherwise very intelligent) who uses it for correspondence.
Who said "every technology?" We're talking about a specific one here with specific up and downsides delineated.
But you shouldn't expect it to take over your actual thinking, because it doesn't actually think. So it's just another tool in the toolbox that can be useful for some applications, but not for all. If you use it for the appropriate tasks, it can be very helpful. If you try to do everything with it, you'll be disappointed.
"Its power seems inescapable. So did the divine right of kings." — Ursula K. Le Guin
Source for this claim? Are you still using Groupon?
Your argument could just as easily be applied to social networks ("are you still using Friendster?") or e-commerce ("are you still using pets.com?"). GPT-3 or Kimi K2 or Mistral is going to become obsolete at some point, but that's because the succeeding models will be fundamentally better. That doesn't mean they weren't themselves fit for a certain task.
Do you still use the internet?
Just like crypto.
Just look at the bitcoin hashrate; it’s a steep curve.
This paragraph really pisses me off and I'm not sure why.
> Critics have already written thoroughly about the environmental harms
Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
> the reinforcement of bias and generation of racist output
I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess.
>the cognitive harms and AI supported suicides
There is constant active rhetoric around the sycophancy, and ways to reduce it, right? OpenAI just made a new benchmark specifically for this. I won't deny it's an issue, but to act like it's being ignored by the industry misses completely.
>the problems with consent and copyright
This is the best argument on the page imo, and even that is highly debated. I agree with "AI is performing copyright infringement" and see constant "AI ignores my robots.txt". I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped *me* from saving images or pirating movies.
Then the rest touches on ways people will feel about or use AI, which is obviously just as much conjecture as anything else on the topic. I can't speak for everyone else, and neither can anyone else.
A "small" 7-rack, SOTA CPU cluster uses ~700 kW of energy for computing, plus there's the energy requirement of cooling. GPUs use much more in the same rack space.
In DLC (direct liquid cooling) settings you supply 20-ish degree C water from the primary circuit to the heat exchanger, get it back at 40-ish degree C, and then pump this heat out to the environment, plus the thermodynamic losses.
This is a "micro" system when compared to big boys.
How can there be no environmental harm when you need to run a power plant on-premises and pump that much heat, at much bigger scale, 24/7 into the environment?
Who are we kidding here?
When this is done for science and intermittently, both the grid and the environment can tolerate this. When you run "normal" compute systems (e.g. serving GMail or standard cloud loads), both the grid and environment can tolerate this.
But running at full power and pumping this much energy in and heat out to train AI and run inference is a completely different load profile, and it is not harmless.
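A rough back-of-envelope sketch makes the scale concrete. Assuming the ~700 kW figure above and the 20→40 degree C water loop (these are the commenter's round numbers, not measurements), the coolant flow required is:

```python
# Back-of-envelope: coolant flow needed to carry away a given heat load.
# Assumed figures: ~700 kW of IT load, water heated from 20 C to 40 C (delta T = 20 K).
SPECIFIC_HEAT_WATER = 4186  # J/(kg*K)

def coolant_flow_kg_per_s(heat_load_w: float, delta_t_k: float) -> float:
    """Mass flow (kg/s) so the water absorbs heat_load_w at a delta_t_k temperature rise."""
    return heat_load_w / (SPECIFIC_HEAT_WATER * delta_t_k)

flow = coolant_flow_kg_per_s(700_000, 20)
print(f"{flow:.1f} kg/s")  # roughly 8.4 kg/s, i.e. ~8.4 liters of water per second
```

That's on the order of 30 tonnes of water per hour cycled through one such "micro" system, before counting cooling-plant losses, which is the point about scale being made here.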
> the cognitive harms and AI supported suicides
Extensive use of AI has been shown to change the brain's neural connections and make some areas of the brain lazy. There are a couple of papers on this.
There was a 16 year old boy's ChatGPT fueled death on the front page today, BTW.
> This is the best argument on the page imo, and even that is highly debated.
My blog is strictly licensed under a non-commercial, no-derivatives license. AI companies take my text, derive from it, and sell the result. No consent, no questions asked.
The same models consume GPL and source-available code alike and offer their derivations to anyone who pays, infringing both licenses in the process.
Consent and copyright are a big problem in AI, and the companies want us to believe otherwise.
> There is constant active rhetoric around the sycophancy, and ways to reduce this, right? OpenAI just made a new benchmark specifically for this.
We have investigated ourselves and found no wrongdoing
> I'm uneducated here, honestly. I don't ask a lot of race-based questions to my LLMs, I guess
Do you have to ask a race-based question to an LLM for it to give you biased or racist output?
I don't think they have, no. Perhaps I'm overlooking something, but their most recent technical paper [0], published less than a week ago, states, "This study specifically considers the inference and serving energy consumption of an AI prompt. We leave the measurement of AI model training to future work."
No, they showed that a smaller AI model embedded in Google results uses less power to train and run than something state-of-the-art. That's quite a different thing altogether.
> I don't ask a lot of race-based questions to my LLMs, I guess
You don't need to ask explicit questions to receive answers where bias is implicitly stated. You've dismissed the argument out of hand without actually meeting it.
> I won't deny it's an issue but to act like it's being ignored by the industry is a miss completely.
The claim was that critics had been vocal about it, not that it had been ignored by the industry.
> I also grew up being told that ANYTHING on the internet was for the public, and copyright never stopped me from saving images or pirating movies.
Policing is always very patchy. You maybe broke the law and got away with it as an individual, that's common. The issue is that these huge businesses can do a level of copyright infringement, and do it on a for-profit basis, while smaller businesses would be eradicated for attempting the same thing, and the artists they're taking from would face similar issues if they attempted even a fraction of that level of plagiarism.
You can't even ask it anything out of genuine curiosity; it starts to scold you and assumes you are trying to be racist. The conclusions I'm hearing are weird. It reminds me of that Google engineer who quit or got fired after saying AI is racist or whatever back in like 2018 (edit: 2020).
That's a crazy argument to accept from one of the lead producers of the technology. It's up there with arguing that ExxonMobil just proved oil drilling has no impact on global warming. I'm sure they're making the argument, but they would be doing that wouldn't they?
I'd be interested to see that report as I'm not able to find it by Googling, ironically. Even so, this goes against pretty much all the rest of the reporting on the subject, AND Google has financial incentive to push AI, so skepticism is warranted.
> I don't ask a lot of race-based questions to my LLMs, I guess
The reality is that more and more decision making is getting turned over to AIs. Racism doesn't have to just be n-words and maga hats. For example, this article talks about how overpoliced neighborhoods trigger positive feedback loops in predictive AIs https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-...
> Copyright never stopped me from saving images or pirating movies.
I think we could all agree that right-clicking a copyrighted image and saving it is pretty harmless. Less harmless is trying to pass that image off as something you created and profiting from it. If I use AI to write a blog post, and that post contains plagiarism, and I profit off that plagiarism, it's not harmless at all.
> I also grew up being told that ANYTHING on the internet was for the public
Who told you that? How sure are you they are right?
Copilot has been shown to include private repos in its training data. ChatGPT will happily provide you with information that came from textbooks. I personally had SunoAI spit out a song whose lyrics were just Livin' On A Prayer with a couple of words changed.
We can talk about the ethical implications of the existence of copyright and whether or not it _should_ exist, but the fact is that it does exist. Taking someone else's work and passing it off as your own without giving credit or permission is not permitted.
I think the main problem for me is that these companies benefit from copyright - by beating anyone they can reach with the DMCA stick - and are now also showing they don't actually care about it at all and when they do it, it's ok.
Go ahead, AI companies. End copyright law. Do it. Start lobbying now.
(They won't, they'll just continue to eat their cake and have it too).
So far, case law is shaping up towards "nope, AI training is fair use". As it well should.
All these points are just trying to forcefully legitimise his hatred.
Also, I think their lean towards a political viewpoint is worth some attention. The point is a bit lost in the emotional ranting, which is a shame.
(To be fair, I liked the ranting. I appreciated their enjoyment of the position they have reached. I use LLMs, but I worry about the energy usage and I'm still not convinced by the productivity argument. Their writing echoed my anxiety and then ran with it into glee, which I found endearing.)
It's pretty clear there are impacts: AI needs energy, consumes materials, creates trash.
You probably just don't mind it. The fact is still a fact; only the conclusion differs. You assess that it's not a big concern in the grand scheme of things and worth it for the pros. The author doesn't care much for the pros, so to them any environmental impact is a net loss.
I feel both takes are rational.
> Together, the nation’s 5,426 data centers consume billions of gallons of water annually. One report estimated that U.S. data centers consume 449 million gallons of water per day and 163.7 billion gallons annually (as of 2021)
> Approximately 80% of the water (typically freshwater) withdrawn by data centers evaporates, with the remaining water discharged to municipal wastewater facilities.
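As a quick sanity check, the two figures in the quote are mutually consistent (a trivial bit of arithmetic, using only the quoted numbers):

```python
# Sanity-check the quoted figures for US data center water consumption (2021).
daily_gallons = 449e6      # 449 million gallons per day (quoted)
annual_gallons = 163.7e9   # 163.7 billion gallons per year (quoted)

implied_annual = daily_gallons * 365
# The daily figure scaled to a year matches the quoted annual figure to ~0.1%.
print(f"implied annual: {implied_annual / 1e9:.1f} billion gallons")
```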
You don't see the difference, or are you willfully ignorant?
No hate, but consider — when I feel that way, it’s often because one of my ideas or preconceptions has been put into question. I feel like it’s possible that I might be wrong, and I fucking hate that. But if I can get over hating it and figuring out why, I may learn something.
Here’s an example:
> Didn't google just prove there is little to no environmental harm, INCLUDING if you account for training?
Consider that Google is one of the creators of the supposed harm, and thus trusting them may not be a good idea. Tobacco companies still say smoking ain’t that bad
The harm argument is simple — AI data centers use energy, and nearly all forms of energy generation have negative side effects. Period. Any hand waving about where the energy comes from or how the harms are mitigated is, again, bullshit — energy can come from anywhere, people can mitigate harms however they like, and none of this requires LLM data centers.
Presented like this, the argument is complete bullshit. Anything we do consumes energy, therefore requires energy to be supplied, production of which has negative side effects, period.
Let's just call it a day on civilization and all (starve to death so that the few survivors can) go back to living in caves or up the trees.
The real questions are, a) how much more energy use are LLMs causing, and b) what value this provides. Just taking this directly, without going into the weeds of meta-level topics like the benefits of investment in compute and energy infrastructure, and how this is critical to solving climate problems - just taking this directly, already this becomes a nothing-burger, because LLMs are by far some of the least questionable ways to use energy humanity has.
You're not uneducated, but this is a common and fundamental misunderstanding of how racial inequity can afflict computational systems, and the source of the problem is not (usually) something as explicit as "the creators are Nazis".
For example, early face-detection/recognition cameras and software in Western countries often had a hard time detecting the eyes on East Asian faces [0], denying East Asians and other people with "non-normal" eyes streamlined experiences for whatever automated approval system they were beholden to. It's self-evident that accurately detecting a higher variety of eye shapes would require more training complexity and cost. If you were a Western operator, would it be racist for you to accept the tradeoff for cheaper face detection capability if it meant inconveniencing a minority of your overall userbase?
Well, thanks to global market realities, we didn't have to debate that for very long, as any hardware/software maker putting out products inherently hostile to 25% of the world's population (who make up the racial majority in the fastest-growing economies) wasn't going to last long in the 21st century. But you can easily imagine an alternate timeline in which Western media isn't dominant, and China and Japan dominate the face-detection camera/tech industry. Would it be racist if their products had high rates of false negatives for anyone who had too fair of skin or hair color? Of course it would be.
Being auto-rejected as "not normal" isn't as "racist" as being lynched, obviously. But as such AI-powered systems and algorithms have increasing control in the bureaucracies and workflows of our day to day lives, I don't think you can say that "racist output", in the form of certain races enjoying superior treatment versus others, is a trivial concern.
[0] https://www.cnn.com/2016/12/07/asia/new-zealand-passport-rob...
Of course, they hide the truth in plain sight: inference is a drop in the ocean compared to training.
I don't hate AI. I hate the people who're in love with it. The culture of people who build and worship this technology is toxic.
From the point of view of a typical, not very curious kid or teen AI seems like a godsend. Now you don't have to put much effort in a lot of things you don't want to do to begin with.
Fool me once, and all that.
"[The Analytical Engine] might act upon other things besides number, were objects found whose mutual fundamental relations could be expressed by those of the abstract science of operations, and which should be also susceptible of adaptations to the action of the operating notation and mechanism of the engine... Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent."
- Lovelace, Ada; Menabrea, Luigi (1842). "Sketch of the Analytical Engine invented by Charles Babbage Esq".
So yes, of course I'm excited about AI. I grew up on 1960s sci fi where AI was pervasive, and most of it wasn't dystopian.
What I'm not excited about is the greedy fucks who are largely in control of AI today and who deploy it to the detriment of society at large. But that is a general problem with greedy fucks (and our political and economic system enabling them), not with AI as such. They can, and do, similarly abuse all kinds of technological advancements.
I think the core issue is that until the industrial revolution the world was pretty much static. If you teleported a hunter-gatherer into a medieval village, he'd figure it out. Meanwhile, trying to explain 2025 to someone stuck in 2015 is a fool's errand. The human brain did not evolve for such rapid environmental changes.
Honestly, the first paragraph is packed full of good talking points; there's definitely a lot of ignoring of the cons of AI happening. I try to remember how I felt when social media first appeared, and I recall loving it, being part of all the hype, finding it amazing, using it all the time...
- Intention to create
- Effort in creation
- Transformation of the medium/canvas
- Originality
- Meaning as interpreted by the artist
- Meaning/influence to the consumer
- Cultural influence of the art
Without an extensive discussion to define all of these terms, I think it's fair to say that there are many human-created works with little to no amount of many of these factors, yet a lot of people would still classify them as art. Yet if an AI creates something that satisfies just as many or more of these factors, people seem far more hesitant to call it art.
I'm neither Pro or Anti "AI can create art," as defining what qualifies as art has been a futile exercise since forever. I feel similarly about the AI intelligence and consciousness questions; if we can't define it for ourselves, how can we hope to define it for another entity? I think the discussions can be productive in fleshing out your viewpoint, but otherwise are fruitless.
Ultimately I think humans are highly functional biological machines that have created something that can mimic us convincingly, and we should just come to terms with that without getting bogged down in debates over definitions.
we must not accept the charade of humanity in machine-generated regurgitations of the utmost average.
That seems like a succinct way to describe the goal to create conscious AGI.
AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems. They're succeeding.
You can't measure "consciousness", but you sure can measure performance. And the performance of frontier AI systems keeps improving.
We don't know if AGI without consciousness is possible. Some people think that it's not. Many people certainly think that consciousness might be an emergent property that comes along with AGI.
>AI industry doesn't push for "consciousness" in any way. What AI industry is trying to build is more capable systems.
If you're being completely literal, no one wants slaves. They want what the slaves give them. Cheap labor, wealth, power etc...
(Mild spoiler): It has a basic plot point about uploaded humans being used to tackle problems as unknowing slaves and resetting their memories to get them to endlessly repeat tasks.
I just can't take anything the author has to say seriously after the intro.
Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those and they would only be true of particular implementations of generative AI, or machine learning, it's not true of technology as a whole.
For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.
Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and accomplish other positive tasks: are all these desires fascist? It's ridiculous.
AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and all AI practitioners is immoral and shitty.
I also don't care what the author has to say after the intro.
I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.
Good observation.
> And to what end? In a kind of nihilistic symmetry, their dream of the perfect slave machine drains the life of those who use it as well as those who turn the gears. What is life but what we choose, who we know, what we experience? Incoherent empty men want to sell me the chance to stop reading and writing and thinking, to stop caring for my kids or talking to my parents, to stop choosing what I do or knowing why I do it. Blissful ignorance and total isolation, warm in the womb of the algorithm, nourished by hungry machines.
There are legitimate uses for which AI (or any other technology to be clear) would relieve everyone. Chores that people HAVE to do but nobody WANTS to do.
If GenAI allows you to build automations for those tasks, by all means it will make your life more meaningful because you will have more time to spend on meaningful things. Think of opening the tap to get water instead of having to carry a bucket home from the well.
It's fine to hate the people who build AI, it's fine to hate the people who push for AI use, it's fine to hate the people who release garbage built with AI, etc. But hating "AI" is nonsensical. It's akin to hating hammers or shoes, it's just a tool that may or may not fit a job (and personally, like the author, I don't think it fits any job at the moment).
I don't get if AI is supposed to be a slave or a machine. Is it sentient or a toaster?
Ok but what are these? People keep saying right now they are trying to figure out where LLMs fit. Someone, somewhere would've figured it out by now - the world is more interconnected than ever before.
I think the approach with all that is going on is entirely wrong - you cannot start with the technology and figure out where to put it. You have got to start with the experience - Steve Jobs famously made this point, and his track record speaks for itself. All I'm seeing is experimentation with the first approach, which is costly both explicitly and implicitly. Nobody from what I see seems to have a visionary approach.
I can see it being useful as a teaching aide, but using it to write my emails, letters or whatever is something I would never consider, as it removes the human element which I enjoy. Sure, writing sometimes sucks, but it's supposed to - work is hard and finishing work is rewarding.
Very soon we will see blog posts about AI burnout, where mindless copy-pasting of output and boring prompt fiddling sucks so much joy out of life that people will begin to lose their sanity.
If I want "AI" I want a model I have full control over, run locally, to e.g. query my picture collection for "all pictures of grey cats in a window" or whatever. Or point a webcam out of my window and have it tell me when the squirrels are fucking with my bird feeder and maybe squirt water at them but leave the birds alone. That would be cool. But turning programmers into copy-pasters, emails into soulless monologues, media into something with minimal or no human input, and so on is something that can die in a fire. It's all low effort, which I have no respect for.
Why not look at the broader context instead of flailing against the machine? What is it about society that makes the automation of labor a bad thing?
As for art, it has always been about how you use the materials and resources you have. Photographs didn't make painting obsolete, but they rendered the pursuit of pure realism painting obsolete. 'AI' generated art does not make any other artform obsolete, but it will make the mechanical regurgitation of derivative works obsolete. If you want to do this on your own, like if you want to paint photorealistic paintings, you are still free to.
- When I use AI, it is typically useful.
- When other people build and do things with AI, it's slop that I didn't ask for, which is a waste of resources and a threat to humanity.
This entirely sums up my thoughts on the technology. I suppose it's rather like the personal benefits vs greater harm of using coal for electricity.
It's easy to use lazily and for use cases that are annoying. But used in the right contexts with the limitations in mind it's personally quite useful indeed.
Although, much of the slop problem is due to lack of consent. Same as how my YouTube video is entertainment to me, and noise to the rest of the passengers.
The moral? It's always been an unbalanced society tumbling into the future. Even if AI has both downsides and upsides we will still make it a part of us. Consider the scale - 1B people chatting for the likes of 1T tokens/day. That amount of AI-language has got to influence human language and abilities as well.
Point by point rebuttals:
- environmental harms - so does any use of electricity, fuel or construction
- reinforcement of bias - all ours, reflected back, and it depends on prompting as well
- generation of racist output - depends on who's prompting what
- cognitive harms and AI supported suicides - we are the consequence sink for all things AI, good and bad
- problems with consent and copyright - only if you think abstractions should be owned
- enables fraud and disinformation and harassment and surveillance - all existed before 2020
- exploitation of workers, excuse to fire workers and de-skill work - that is AI being used as excuse, can't be AI's fault
- they don’t actually reason and probability and association are inadequate to the goal of intelligence - apparently you don't need reasoning to win gold at IMO
- people think it makes them faster when it makes them slower - and advanced LLMs are just 2.5 years old, give people time to learn to use it
- it is inherently mediocre - all of us have been at some point
- it is at its core a fascist technology rooted in the ideology of supremacy - LOL, generalizing Grok to all LLMs?
The author mixes hate of AI with hate of people behind AI and hate of how other people excuse their actions blaming AI.
Yeah, "statistics is fascism" - Umberto Eco (probably)
"AI makes me feel stupid" - economically struggling millennial
"This waymo stuff the money goes to big corporations instead of me a hard working American that contributes to the economy" - Uber driver
Meanwhile, all the wealthy business owners are fascinated with it cause they can get things done without having to hire.
I think you need to add the word "potentially" in front of "get things done". The Venn diagram of what current LLMs can do and what wealthy business owners think LLMs can do has the smallest of overlaps.
are the authors genuinely or merely performatively ignorant?
Ignorant, to be precise, of the often comical extent to which they very obviously construct—to their own specification and for their purposes—the object of their hostility...?
While dismissing—in a fashion that renders their reasoning vacuous—the wearying complexity of the actually-observable complex reality they think they are attacking?
One of the most obvious "tells" in this sort of thing is the breezy ease with which abstract _theys_ are compounded and then attacked.
I'm sorry, Anthony; there is no they. There is a bewildering and yes, I get it, frightening and all but inconceivable number of actors, each pursuing their own aims, sometimes in explicit or implicit collusion, sometimes competitively or adversarially...
...and that is but the most banal of the dimensions within which one might attempt to reason about "AI."
Frustration is warranted; hostility towards the engines of surveillance capital and its pleasure with advancing fascism is more than warranted; applications of AI within this domain and services rendered by its corporate builders—all ripe and just targets.
But it is a mistake that renders the critique and position dismissible to slip from specifics to generalities and scarecrows.
Because AI relies on brute force. And at its roots, hacking is DEFINITELY NOT about it.
But there is too much money and greed involved to stop this now. The only thing I can do is avoid any product or service that mentions AI, chatGPT, .ai domain, smart, agent etc. etc.
It feels like we are on a cliff edge, just before every government builds in a dependency on this nightmare technology. Billions more will be wasted whilst the planet burns.
"[AI] is at its core a fascist technology rooted in the ideology of supremacy"
and
"The people who build it are vapid shit-eating cannibals glorifying ignorance."
tl;dr: This person professes to hate AI. They repeat the same arguments as others who hate AI, ignoring that it is an emerging technology with lots of work to do. Regardless of AI's existence, power infrastructure needs to improve and become more environmentally friendly.
Finally, AI is not going away, and we cannot make it go away. That cat is out of the bag.
I know it was there the entire time, so what exactly was suppressing the attention towards it? Was it satisfied customers or the companies paying to deplatform the message?
Frankly, it's gotten kind of boring and more recently it's to where I don't even like talking about it anymore. Of course, the non-technical general public is split between those who mistakenly think it's much 'smarter' or more capable than it is and those who dismiss it entirely but often for the wrong reasons. The disappointing part is how deeply polarized many of my more experienced technical friends are between one of those two extremes.
On the positive side there's endless over-the-top raving about how incredible AI is, and on the negative side overwhelming angst over how unspeakably evil and destructive AI is. These are people who've generally been around long enough to see long-term trends evolve, hype cycles fade, bubbles burst and certain world-ending doom eventually arrive as just everyday annoyance. Yet both extremes are so highly energized on the topic they tend to leap to some fairly ungrounded, and occasionally even irrational, conclusions. Engaging with either type for very long gets kind of exhausting. I just don't think AI is quite as unspeakably amazing as the ravers insist OR nearly as apocalyptic as the doomers fear - but both groups are so into their viewpoint it borders on evangelical obsession - which makes it hard for anyone with an informed but dispassionate, measured and nuanced perspective to engage with them.
I don't care that you hate it. It's the best thing to happen to us in a long time and anyone who disagrees does so on a mountain of privilege. I'm happy for you to have learned everything you know, but to desire to take it away from everyone else is abhorrent to me.
In the end, it doesn't matter what you or I think. You can hate AI, but it's not going away. The industry needs more skeptical, level-headed people to help figure out how best to leverage the technology in a responsible way.
With this the article lost all seriousness for me. I may be on board with a lot of what you are saying, but pretending you know the answer to these questions just makes you look as idiotic as anyone who says the opposite.
>at its core a fascist technology rooted in the ideology of supremacy
>inherently mediocre and fundamentally conservative
>The machine is disgusting and we should break it
Jesus. Unclear why anyone would endorse this blogpost, much less post it on a website focused on computer science and entrepreneurship.
And, conversely, for those who don't share that premise, this article is a good reminder why debating the subject matter is usually pointless. There's no objective argument that you could possibly make to the author and other people like him to convince them otherwise.
All this while consuming more electricity than ever before, during an emerging global climate crisis. And destroying our water supplies to boot. There is no good in any of this.
Miyazaki was absolutely right. Though I'll paraphrase him just a little: Capitalism is an insult to life itself.
These people are insufferable.
Are the companies funding this push for LLMs contributing to healthy cultures? The same companies who ruined societal discourse with social media? The same people who designed their algorithms to be as addictive as possible to drive engagement?
"Why are you selling those?" asked the little prince.
"Because they save a tremendous amount of time," said the merchant. "Computations have been made by experts. With these pills, you save fifty-three minutes in every week."
"And what do I do with those fifty-three minutes?"
"Anything you like..."
"As for me," said the little prince to himself, "if I had fifty-three minutes to spend as I liked, I should walk at my leisure toward a spring of fresh water.”
― Antoine de Saint-Exupéry, The Little Prince
> Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.
This word salad shows that the author is out to stack leftist jabs. I want to be respectful, but this paragraph suggests the author does not think for themselves and is just using this as an opportunity to signal that they are in the "in group" amongst the tech-cynics.
Post is probably going to get flagged, for what it's worth.
No matter how good things get there will always be people filled with this sort of rage, but what bothers me is how badly this site wants to upvote this stuff.
HN is supposed to gratify intellectual curiosity. HN is explicitly not for political or ideological battle. Fulmination is explicitly discouraged in the guidelines. This article is about as far as I can imagine from appropriate content for HN. I strongly wish that everyone who wants this on the front page would find another site to be miserable on together, and stop ruining this one.
I'm not saying that they are right or wrong, but you should at least respect their right to have their own opinions and fears instead of pointing to an illusory appropriate content for HN.
Many of the same concerns and objections people raised about electricity can be applied to AI. Everything under the sun back in the day became "electrified", just like AI today; most of those use cases were ridiculous and deserved to be made fun of.
But more concerningly, I think, people like this don't sound like "real" haters; they're positioning themselves in some kind of social-signaling way.
I was (and still am) a social media hater, and this person is clearly a child of the social justice / social signaling days of social media. Their entire personality seems to have been shaped by that era, and that's something I'm happy to blame on the tech industry.