A more cautious approach like that would go against the Silicon Valley ethos of "do first, ask questions later", though. So it probably won't happen.
We're racing into a fundamentally deep and irreversible societal shift, at least on the same order of magnitude as the agricultural or industrial revolution. Maybe even many orders of magnitude deeper. Society will change so profoundly that it will be at least as unrecognizable as our lives would look to the average person from the Bronze Age. There's absolutely no reason to assume this will be a good change. And even if it's not something I personally will have to live with, my descendants most certainly will.
I'll admit, I also draw a blank when I try to imagine what the consequences of all this will be, but it's a blank as in "staring into a pitch-black room and having no idea what's in it" - not ignoring the darkness altogether. "Mass psychosis" is a good term for the latter, I think.
The collective blind spot is the failure to understand that there's NOTHING that says we're gonna 'make it'.
There's no divine being out there watching out for us. This isn't a fucking fairy tale; you can't assume that things will always 'work out'. Obviously they've always worked out until now, since we're able to have this conversation, but that does NOT mean things will keep working out indefinitely into the future.
Baseless conjecture: I think we are biased towards irrational optimism because it's an adaptive trait. Thinking everything will work out is better than not, because it means you're more likely to attempt escaping a predator or whatever despite a minuscule chance of success (which is better than not trying at all). It's another entry in the list of instincts we've inherited from our ancestors that bite us in the ass today (like being omnivorous, liking sweets, tribalism, the urge to reproduce, etc.).
You seem like you've given this a bunch of thought, and I wanna chat more about this and pick your brain about a few things. Have you ever thought about whether this intersects with the Fermi paradox somehow?
Drop me a line here: l7byzw6ao at mozmail dot com
(The Fermi paradox is also the kind of thing discussed on LessWrong.)
Yes, knowledge-worker jobs may suffer significantly, but that is far from being all of ‘humanity’.
It seems to me that professions that involve interacting with the real world could go largely untouched (dentists, factory workers, delivery people, drivers, anyone working with nature).
Of course, feel free to hit me up with your counter-arguments!
people talk about whether or not AGI will come in the next five years. that doesn't matter at all. what matters is whether or not there is a chance that it will happen. it is clear that if AGI arrives soon and damages society, future generations will look back on us and say that we were unbelievably stupid for overlooking such blatant and obvious warning signs. if it could be determined that AGI is something that should be avoided at all costs, and it can, then there is no reasonable course of action other than to halt the progress of AI as much and as quickly as possible, and to make an attempt to do so even if success is not guaranteed.
i'll just go through it as quickly as possible. the emergence of AGI would be highly detrimental to human society: it would create severe economic shocks, it would advance science and technology quickly enough to create the most severe power vacuum in the history of the world, and it would render the very concept of a country geopolitically untenable. it would transform the world into something totally unrecognizable, a place where human industry is not just redundant but cosmically irrelevant. we would become a transient species, wiped out because we posed the slightest inconvenience to the new machine meta-organisms - like a species of plant wiped out by the chemical byproduct of some insignificant industrial process. a nightmare.
I also find it funny how the paperclip-maximizer scenarios are at the forefront of the alignment people's thoughts, when even an aligned AI would reduce humanity to a useless pet of the AGI. I guess some could find such an existence pleasant, but it would nonetheless be the end of humanity as a species with self-determination.
An economic system has two purposes: to create wealth, and to distribute wealth.
The purpose of an economic system is not to provide people with jobs. Jobs are just the best way we've found thus far to create and distribute wealth.
If no one has to work but wealth is still being created, then we just need to figure out a new way to distribute wealth. UBI will almost certainly be a consequence of the proliferation of AI.
the only reason humans persist is that we are the best. if another country wages war on us, humans will be the winner no matter the outcome. but with AGI, humans won't always be the winner. even if we managed to create some kind of arrangement where the goods and services created by an automated economy were distributed to a group of humans, that would end very quickly, because some other class of meta-organism - made into the meanest and fittest meta-organism by natural selection among the machines, a gnarled and grotesque living nightmare - would destroy that last enclave of humans, perhaps without even realizing it or trying to. axiomatically, long term, your idea doesn't work.
Let’s take, for example, the fact that Earth is likely to become uninhabitable in a few centuries or millennia. The only thing that can save us is unprecedented technological advancement in energy, climate, or space travel. Maybe humans won’t be able to solve that problem, but AI will. So even if we lose our jobs, it will still be a net benefit.
Kind of like how wild animals are unable to solve environmental problems that would lead to their extinction, but we humans, the superior species, are able to protect them (when we make the effort to, at least).
I also think it will occur much sooner than most people expect. Maybe five years until everyone is replaced.
However, I don't think that is inherently bad.
Even if this means the extinction of mankind, as long as we leave this planet to some form of "life", or some replicating mechanism that's capable of thinking, feeling, and enjoying its "life", I'm fine with it.
Our focus should be on preventing this situation from turning into slavery and worldwide tyranny.
One hypothetical example: it decides to "help" us by preventing any more human pain and death, so it cryogenically freezes all humans. Now its goal is complete, so it simply halts/shuts down.
like OpenAI (2016): https://web.archive.org/web/20151222103150/https://openai.co...