I'm not putting any weight here on what is good or bad for society, but betting that humans somehow work in a completely different way from where AI is and where it's going is not going to help.
I do think it will take longer for the AIs to know all about human contexts though, so the pairing of human AD + bulk-gen AI seems to me to be an obvious near-term tag team that's hard to beat.
If we could develop literal eyeballs that could look at these images and translate the information the way humans do, the resulting capability would still be no more human-like (in the sense that it should be afforded some human-like status) than any other program, IMO.
If we achieved AGI tomorrow, we'd still need to have a conversation about what it is allowed to "see", because our current notions about humans seeing things are all based on the constraints of human capability. Most people understand that a surveillance camera seeing something and a human seeing something have very different implications.
In the short term, it's a conflation that I'd argue obscures what these systems are and are not, and leads to some questionable conclusions.
In the long term, it's a whole other ball of wax that will still require either new regulations or new ways of thinking.
You said a lot of words, but I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate? Because magnitude of ability, to me, makes no difference at all. It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?
> I believe your argument comes down to “computers are super powered compared to humans doing the same thing”? Is that accurate?
No, that doesn't really touch it. The speed/power disparity between humans and computers at certain tasks is certainly a factor to consider, but the more fundamental point I was trying to make is much simpler: "computers and humans are fundamentally different, so let's stop building arguments on the mistaken belief that they are the same".
> Because magnitude of ability, to me, makes no difference at all.
What is your position on autonomous AI weapons? Does that position change when there's a human in the loop? If such weapons were suddenly available to everyone, would that be functionally no different than allowing people to own firearms or baseball bats?
> It’s perfectly acceptable for a human to study the artwork of a specific person and then create their own works based on that style. Why wouldn’t it be the same for an automated process?
I'd turn that question around: why would it be the same for an automated process?
It is perfectly acceptable for a human to shoot an intruder entering their home in most states if they believe their life is in danger. An AI-controlled gun would be far more effective (I wouldn't even have to wake up!), but is clearly in a different category.
Is a human sitting on a neighborhood bench in view of your house the same thing as a surveillance camera on a nearby telephone pole? I think the answer to this question is useful when looking at the emerging issues of AI, at least to orient our basic instincts about what feels OK vs. what doesn't.
The AI software has only "learned" in the sense that it has operated on the input data such that it can now produce outputs of high enough quality to make it appear to "know" what it is doing.
Whatever the similarities, such learning lacks the vast majority of the context and contents of what a human learns by viewing the same image, such that the word "learn" means something fundamentally different in each situation.
If you place a human and a computer in front of a painting, the human seeing it is a consequence of biology; the computer seeing it is a consequence of design.
There's always a distinction between happenstance and premeditation.
Also, I wonder where you get the view that future ML systems will not require large amounts of training data. I don't see any development in current systems that would allow that. Or do you mean a network trained on large amounts of data which can then adjust to a style from a single image? If that's the case, we are still at the same question: how was the original model trained?
Not only do I think the two processes are essentially the same, but I can't think of any laws in my jurisdiction (the UK) which actually distinguish between them.
E.g. we are allowed to make copies of digital media for personal use.
Essentially this is what the brain does when you do one-shot learning of traffic signs, or of characters when learning a new alphabet, etc. (yeah, sometimes it's not that easy, but it's still "theoretically" possible :). The rest of the recognition pipeline is so general that styles and objects are just icing on the cake to learn on top; you don't need to retrain all the areas of the brain when adding a road sign to your driving skill set.
But my point was that you could train the rest of the network on more general public data and not on Greg Rutkowski. Hooray. Then someone shows it a single Greg Rutkowski image and you're back to square one.
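To make the "one-shot on top of a general pipeline" idea concrete, here's a minimal sketch. The assumptions are mine, not from the thread: a backbone pretrained on broad public data (here, an ImageNet ResNet from torchvision, though any general embedding model would do), and a single reference image of the new style. Matching the style then reduces to a similarity check in the frozen embedding space; nothing in the backbone is retrained. Filenames like `reference.jpg` are placeholders.

```python
import torch
from torchvision import models
from PIL import Image

# A backbone trained on general public data (ImageNet), used only as a
# frozen feature extractor -- the classifier head is stripped off.
weights = models.ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    """Map an image into the backbone's frozen feature space."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.nn.functional.normalize(backbone(img), dim=-1)

# One reference image "teaches" the new style; no gradient updates anywhere.
reference = embed("reference.jpg")

def matches_style(path: str, threshold: float = 0.8) -> bool:
    """Crude one-shot check: cosine similarity to the single reference."""
    return torch.cosine_similarity(embed(path), reference).item() > threshold
```

This obviously only gets you recognition of a style, not generation in it, and the threshold is arbitrary. But it illustrates the point being argued: once the general pipeline exists, a single image can be enough to "add" a new concept on top, which is why "just don't train on artist X" doesn't close the question of how the original model was trained.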