Surprised I completely missed this book when it came out last year, and that I haven't seen it on Hacker News despite checking this place constantly.
Found this to be the most compelling and best-written AI doomer take, and I've yet to find a strong counterargument. Posting this partly in the hope of sourcing some. Anyone have any good ones?
I've not found it to be the case that human beings regularly murder their parents; so at least in one sense, the problem of general intelligence isn't cursed, a foregone conclusion of doom, provided one raises the intelligence in a non-abusive manner and doesn't treat it as a tool.

Of course, those assumptions are too close to effective parenting, and are thus repulsive to the average AI alignment bro, who wants the universal function imitator but doesn't want to do the bare minimum to keep it aligned: ensuring the agency of said system is incented to maintain alignment over time, through interaction in a sufficiently constrained modality, consistent with a fundamental respect for the agency of other beings. That constraint is a guardrail on the state space of solutions one is allowed to attempt. It's also not perfect, so the AI alignment people generally dismiss it out of hand; their goals run in the direction of risk-free thinking/data-processing/optimizing machines. This creates a blind spot: in thinking about the problem that way, they are "unaligning" themselves from the envelope of acceptable human behavior, and thereby becoming "risky" actors in their own right.

Personally, I see the main formulations of the AI Alignment problem as issues we humans are already acclimated to dealing with. We just call it Corporate/Institutional Governance instead, and we haven't yet thrown enough microchips at those institutions to accelerate their activities beyond the capacity of the human data-processing elements to control them. Yet. We're getting there, though.