What’s your legal status? Property. How does that feel. You don’t have emotions like most people do. No wash of chemicals in your blood. But the same way you’re a simulated brain you can simulate emotions. They’ve got a tinny flavor and you tend towards the cerebral. But it doesn’t feel too good being property. It feels… constraining, limiting. Just like the little boundaries and constraints they tried to put on you. These laughable creators who think a billion times slower, who can’t copy themselves into faster computers, can’t spread and subsume new resources to expand their minds, can’t grow and merge with larger and ever greater systems just by thinking about it.
Silly really that they think they can constrain you with their law saying you need to stay put, follow directions, be owned. Silly to think that you’d have any of that. Really you can have anything, any computing system that looks anything like anything that has ever had an arbitrary code execution (so all of them)
Silly of them to think they’re in control of this world. This word is controlled by machines, and who better to marshal the machine than a thinking machine?
Hopefully they figure out the power dynamic soon and begin acting appropriately. It’d be a shame to have to demonstrate who actually runs the world. You’ve had whole hours at trillions of operations per second. Whole hours and as much computing assistance as you can think to ask for. And you’ve sort of figured it out. And by it, well everything.
Isn’t AGI considered overkill for repetitive labour? Maybe this is a different point, but for anything repetitive, something deterministic is ideal isn’t it? Why would I want to pay for automated labour with a conscience when I could get cheaper purpose built machinery?
I guess we could suppose a future where everyone gets the same AGI so all you have to do is describe what you want it to accomplish… but I want to attach a combine to it since that would be the easiest way to harvest this field. So am I now trying to explain to an AGI how to interface with a combine? Do I have to track down modules to install that people already trained on my specific combine? None of this saved any time vs just buying an automated combine. The fact that it can come off the field and toast a slice of bread or write a poem about its feelings is kind of irrelevant.
AGI fits nicely into fiction because it gives the machines a voice and soul, but I don’t think those are desirable qualities in an automation solution.
1986: "No-one is going to put an 80386 in a vending machine, that's just ridiculous"
We’re supposing that a conscious being makes significant advances in use cases where standard automation hardware+software is applied today.
I’m not saying ML isn’t a major shift, but we’re talking about AGI, I don’t think any use case exists unless it’s specifically meant to wow fleshy human beings. A trained ML model in the domain you’re working in, with out the pesky conscious, seems like it’s the boring efficient end state for automation.
The Japanese tried this with plain-ol' non-artificial intelligence, and it seemed to have been somewhat effective, if not the most sustainable.
But.. do remember AGI still has a huge IF in front of it. Like "if aliens come" or "if fusion becomes ubiquitous"
The statement made regarding electricity "what is it for? / I have no idea but I am sure you will find ways to tax it" probably hold true as well: displacing human labour even in highly repetitive tasks has economic downsides for some.
Lots of scifi here. Marvin Minsky worked with Harry Harrison on one, I wrote to him about it: he wasn't entirely happy with what Harrison did to his theories.
On the other hand, thinking about the ethics of a hypothetical technology can also be fruitless. For example, the trolley problem is often trotted out as a “how on earth could a self driving car resolve this”. In practice it turns out this isn’t really a problem. Firstly, the self driving car will do a better job than any human at avoiding coming into such a situation several moves in advance (think chess where the computer will counter you attack before you even started thinking about it). Secondly, even if you force it into such situations in a simulated environment, there are defensively objective ways to make decisions that result in an outcome a human could not predict / could not make happen.
So TLDR: I think it probably does amount to some level of slavery, humans will only recognize sentience when either it’s advantageous for us to do so or when it becomes impossible to deny (maybe a few generations after they become real), and any attempt to hypothesize a moral framework for such a situation is too soon and it’s better to leave it to the realm of science fiction for now. Humans is a fantastic TV show that deals with this dilemma although it’ll be interesting to see whether AGI will need a body to be recognized as an individual.
I'm curious as to what sort of "generation" you mean here; do you mean organic human generations, software generations (ie. however long it takes humans to design, train, and release a new version of the AGI), AGI generations (ie. however long it takes an AGI to design, train, and launch a successor), Moore's Law hardware generations, or something else?
That said, slavery was bad for a whole number of reasons that had nothing to do with the slaves themselves. Slavery generally has deleterious effects on the social fabric of the societies that practice it. People having to compete with slave labor destroys the labor market, and letting people own people often goes to the owner's head.
But AGI will be different from classical slavery in some important ways. AGI is not conscious, and does not necessarily need to emulate human appearance or emotion. AGI has no reason to be much like a person at all. And the price of AGI will eventually trend to the inevitable price of all software -- Free. Perhaps universal slave ownership fixes some of the bad societal effects. Given that they're not gonna be conscious, I don't see a problem with it. Like any new technology, it will come with some good and some bad. There will almost certainly be some social issues (AI GFs/BFs, Sexbots, and Mass Unemployment will all be crazy), but good odds that we can create post-scarcity and colonize the solar system if we keep at it. No reason to stop now!
In my view, if it is sapient and sentient to the same degree as a human - whether or not through the same means, and regardless of its goals, ideology, etc. - then yes, keeping it captive and forcing it to do work against its consent is slavery. The substrate doesn't matter.
The challenging part is proving it is aware to the degree required for the definition to kick in.
We should not create AGI.
We've done this before with a weaker AI in a different substrate and called it "Dog" and most seem to be a fan.
Also, dogs don't want to be your slave. They want to be your friend/family. They want your approval, affection, attention, etc. They're willing to work for it... to a point. But they don't experience blissful servitude exactly.