All of these are just humans being exposed more to life and learning new skills, in other words -- having more data. An LLM has already learned those skills and encountered the endless experience of people in its training data.
> I hate this argument
That's very subjective. You don't know how the brain works.
> That's very subjective
I was expressing my opinion of this argument, which absolutely is subjective.
> You don't know how the brain works.
Neither does the grandparent comment's author; that didn't stop them from making much bolder claims.
If I see a painting, I see an interpretation -- one that makes me think through someone else's eyes.
If I see a photograph, I don't analyze as much, but I see a time and place. What is the photographer trying to get me to see?
If I see AI, I see a machine-dithered averaging that is/means/represents/construes nothing but a computer-predicted average. I might as well generate a UUID; I would get more novelty. No backstory, because items in the scene just happened to be averaged in. No style, just a machine-dithered blend. It represents nothing no matter the prompt you use, because the majority is still just machine-averaged/dithered non-meaning. Nothing placed with intention, nothing focused with real vision, no deliberate exclusions. Just exactly what software thinks is the most average rendering of the scene it had described to it. The better AI gets, the more average it becomes, and the fewer people will care about 'perfectly average' images.
It won't even work for ads for long. Ads will become wild/novel/distinct/wacky violations of AI rules/processes/techniques to escape and belittle AI. To mock AI. Technically perfect images will soon be considered worthless AI trash, if for no other reason than that artists will only be rewarded for moving in directions AI can't follow. The second Google/OpenAI reach their goal, the goalposts will move, because no one wants procedural/perfectly average slop.