But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can’t be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated: work that both reads as human-made and carries the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
If it were simple, we wouldn't need neural nets for it - we'd just code the algorithm directly. Or, at least, we'd be able to explain exactly how they work by looking at the weights. But now that we have our Babelfish, we still don't know how it really works in detail. This is ipso facto evidence that the task is very much not simple.
Imperfection is not the problem with "AI Art". The problem is that it is really hard to get the models not to produce the same visual motifs and cliches. People can spot AI art so easily because of those motifs.
I think Midjourney took this to another level with their human feedback. It became harder and harder to avoid producing the same visual motifs, to the point that it is basically useless for me now.
I hope you're right, but when I think about all those lawyers caught submitting unproofread LLM output to a judge... I'm not sure humankind is wise enough to avoid the slopification.
The usual solution is to specify one language as binding, with that language taking priority if there turn out to be discrepancies between the versions.
There are bound to be all kinds of complicated sociopolitical effects, and as you say there is a backlash against obvious AI slop, but what about when teams of humans working with AI become more skillful at hiding that?
IMO these are terrible, I don't understand how anyone uses them. This is coming from someone who has always loved audiobooks but has never been particularly precious about the narrator. I find the AI stuff unlistenable.
This simply isn't true, unless you're considering any minor refinement to a human-created design to be "often done with AI".
It certainly sounds like you're implying AI is often the initial designer or primary design tool, which is completely incorrect for major publishers and record labels, as well as many smaller independent ones.
I found your post “Coding with LLMs in the summer of 2025 (an update)” very insightful. LLMs are memory extensions and cognitive aides which provide several valuable primitives: finding connections adjacent to your understanding, filling in boilerplate, and offloading your mental mapping needs. But there remains a chasm between those abilities and much work.
These suck. Things made with AI just suck big time. Not only are they stupid, they add negative value to your product.
I cannot think of a single purely AI-made video, song, or any other form of art that is any good.
All AI has done is falsely convince people that they can now create things they had no skills to make before AI.
Songs right now are terrible. For videos, things are going to be very different once people can create full movies on their computers. Many will have access to the ability to create movies, a few will be very good at it, and this will likely change many things. By the way, this stupid "AI look" is only transient and not at all necessary. It will be fixed, and AI image/video generation will be impossible to stop.
It'll only stand on its own when significant work is required. This is possible today with writing, provided the AI is directed to incorporate original insights.
And unless it's immediately obvious to consumers that a high level of work has gone into it, it'll all be tarred with the same brush.
Any workforce needs direction. Thinking an AI can creatively execute when not given a vision is flawed.
Either people will spaff out easy-to-generate media (which will therefore have no value, due to abundance), or they'll spend time providing insight and direction to create genuinely good content... but unless it's immediately obvious this has been done, it will suffer the same tarring by association.
The issue is really one of deciding to whom to give your attention. It's the reason an ordinary song is a hit when produced by a megastar but not when performed by an unsigned artist. Or, as in the famous experiment, the same world-class violinist earned about $22 busking at a recital while selling out a concert hall at $100 per seat that same week.
This is the issue AI, no matter how good, will have to overcome.
Maybe you’re a gentleman of such discerningly superior taste that you can always manage to identify the spark of human creativity that eludes the rest of us. Or maybe you’ve just told yourself you hate it and therefore you say you always do. I dunno.
As someone who speaks more than one language fairly well: We can tell. AI translations are awful. Sure, they have gotten good enough for a casual "let's translate this restaurant menu" task, but they are not even remotely close to reaching human-like quality for nontrivial content.
Unfortunately I fear that it might not matter. There are going to be plenty of publishers who are perfectly happy to shovel AI-generated slop when it means saving a few bucks on translation, and the fact that AI translation exists is going to put serious pricing pressure on human translators - which means quality is inevitably going to suffer.
An interesting development I've been seeing is that a lot of creative communities treat AI-generated material like it is radioactive. Any use of AI will lead to authors or even entire publishers getting blacklisted by a significant part of the community - people simply aren't willing to consume it! When you are paying for human creativity, receiving AI-generated material feels like you have been scammed. I wouldn't be surprised to see a shift towards companies explicitly profiling themselves as anti-AI.
I also disagree that it's "not even remotely close to reaching human-like quality". I have translated large chunks of books into languages I know, and the results are often better than what commercial translators do.