- Agenda
- Straightforward fear of AI
- Fear that AI might trigger social changes or upheavals that are really not so bad overall...but which *are* so bad for them & their friends
- Mirroring the fears of their peers
Well worth noting: the "leaders" talking about AI are not magically wise, nor especially foresighted, nor widely experienced. They've mostly gotten to be leaders by being utterly obsessed with getting ahead in the human social hierarchy, and by devoting their lives to doing that in some narrow social niche or other. There are human-nature reasons why the leaders of ~every historical major industry failed to be leaders in the industry that replaced it.

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
I'm convinced the threat is real, but I have no idea what the timeline is. I hope that, like most things, we'll skate by, stop calling it AI once it happens, and treat it like any other tool. I strongly doubt that will be the case.
I suspect what will actually happen is that peak oil will catch us off guard, we won't have the spare power available to train GPT-7, and that will avert the singularity.
I see the danger; let me give an analogy.

What if, according to the laws of physics, it were possible to make a thermonuclear weapon out of beach sand using a microwave oven?
That's something so absurd that we'd never figure it out ourselves, but an AGI could. Dangerously destabilizing knowledge of that scale could show up at any time from a superintelligent AGI.
It's bad enough that nation-states have the resources to make civilization-ending weapons. I think AGI could super-empower anyone with access to it.
---
On the other hand, what if it were possible to make unlimited clean energy using beach sand, a microwave oven, and some whiskey as a catalyst? AGI could make that future possible as well.