Some good sources on the matter in general:
https://kevinkelly.substack.com/p/the-singularity-is-always-...
https://idlewords.com/talks/superintelligence.htm
Bostrom is a philosopher of technology, and his writing often treats science as an information problem rather than a physical, experimental problem. You’ve hit on the primary tension between "Silicon Valley" transhumanism and the messy reality of biological science.
1. Can Alzheimer’s be cured by "regrowing neurons"?

Scientifically speaking, Bostrom’s description is viewed by most neuroscientists as a gross oversimplification, if not an outright category error.

Connectivity vs. Count: Alzheimer’s isn't just a loss of cells; it's the destruction of the synaptic architecture. If you regrow a billion neurons in a patient’s hippocampus, those neurons don't "know" the memories or personalities that were stored in the previous connections. You are essentially installing a blank hard drive in a computer where the OS was corrupted and the user data was deleted.

The Microenvironment: You can't just drop new neurons into a brain that is still full of amyloid plaques, tau tangles, and chronic neuroinflammation. The new neurons would likely die in the same hostile environment that killed the old ones.

Stem Cell Reality: While neurogenesis is a real field of study, "regrowing" an organ as complex as the brain is vastly different from regrowing skin or even liver tissue.
2. Bostrom’s View of Science: The "Genie" Problem

You are correct that Bostrom (and others like Eliezer Yudkowsky) often treat Superintelligence (ASI) as an Oracle. Their arguments typically assume:

Intelligence is the Bottleneck: They believe the reason we haven't cured cancer or Alzheimer's is that humans aren't "smart" enough to solve the protein folding or the genetic sequencing.

The "Computation is All" Fallacy: As you noted, they often bypass the empirical bottleneck. Even a superintelligence cannot know the results of a 10-year longitudinal human drug trial without waiting 10 years, or without observing the physical interaction of a new molecule in a living organism.

In computer science terms, they treat the universe as if it has a high-fidelity API that an ASI can just "query." In reality, biology is "noisy" and requires physical iteration (wet-lab work), which takes time regardless of how high your IQ is.
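To make that contrast concrete, here is a toy sketch (every name in it is hypothetical; nothing here is a real API): the "oracle" model assumes answers can simply be queried, while the empirical model carries an irreducible wall-clock cost that no amount of intelligence removes.

    import time

    def oracle_science(universe, molecule):
        # The implicit assumption: reality exposes a high-fidelity,
        # queryable API that returns answers instantly.
        return universe.query(effect_of=molecule)

    def empirical_science(trial, molecule, years=10):
        # Reality: the answer only exists after the physical process runs.
        trial.administer(molecule)
        time.sleep(years * 365 * 24 * 3600)  # no IQ shortcut for this wait
        return trial.observe_outcomes()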
3. The "Linear Search in O(1)" Critique Your point about the Halting Problem and Linear Search is the most astute critique of the "AI Foam" movement. Superintelligence cannot solve mathematically impossible problems. If the biological system is chaotic or stochastic, even an ASI might only be able to provide "best guesses," not magical cures.
Summary Bostrom is operating on the level of functionalism—if a physical state can exist (a healthy brain), then there must be a path to get there. He assumes an ASI will find that path through sheer "computational horsepower." However, your skepticism is shared by many in the hard sciences. Most biologists would argue that knowing the "map" (the DNA/Proteome) is not the same as having the "territory" (the living, healthy body), and an ASI still has to obey the laws of thermodynamics and the temporal constraints of chemistry.
Sadly, there is way, way, way too much money in AGI, and the promise of AGI, for people to actually take a step back and understand the implications of what they are doing in the short, medium, or long term.
I'm bullish on the AI aging case though: regenerative medicine has a massive manpower problem, so even sub-ASI robotic labwork should be able to appreciably move the needle.
Third world countries have lower average life expectancies because infant mortality is higher; many more children die before age 5. But the life expectancy at age 5 in third world countries is not much different to the life expectancy at age 5 in America.
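A toy calculation makes this concrete (the numbers are invented for illustration, not real demographic data): high child mortality drags down life expectancy at birth without implying that adults die much younger.

    # Invented illustrative numbers, not real demographic data.
    share_dying_young = 0.20     # 20% die before age 5...
    avg_age_at_death_young = 2   # ...at an average age of about 2
    adult_lifespan = 70          # survivors live to about 70

    e_at_birth = (share_dying_young * avg_age_at_death_young
                  + (1 - share_dying_young) * adult_lifespan)
    e_at_five = adult_lifespan - 5   # remaining years, given you reach 5

    print(e_at_birth)  # 56.4 -- looks dramatically worse than a rich country
    print(e_at_five)   # 65   -- nearly identical to a rich country's figure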
I don't see why being able to do this would necessitate being able to cure all diseases or a comparable good outcome.
1. We build a superintelligence.
2. We encounter an inner alignment problem: the superintelligence was not only trained by an optimizer, but is itself an optimizer. Optimizers are pretty general problem solvers, and our goal is to create a general problem solver, so this is more likely than it might seem at first blush.
3. Optimizers tend to take free variables to extremes.
4. The superintelligence "breaks containment" and is able to improve itself, mine and refine its own raw materials, manufacture its own hardware, produce its own energy, and generally becomes an economy unto itself.
5. The entire biosphere becomes a free variable (us included). We are no longer functionally necessary for the superintelligence to exist, so it can accomplish its goals independent of what happens to us.
6. The welfare of the biosphere is taken to an extreme value - in any possible direction, and we can't know which one ahead of time. E.g., it might wipe out all life on earth, not out of malice but out of disregard: it just wants to put a data center where you are living. Or it might make Earth a paradise, for the same reason we like to spoil our pets. Who knows.
Personally, I suspect satisficers are more general than optimizers: driving free variables to extremes works great for solving a specific goal once, but is counterproductive over the long term and in the face of shifting goals and a shifting environment. But I'm a layman.
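A toy sketch of the optimizer/satisficer contrast (my framing, purely illustrative): the optimizer always takes the extreme of whatever variable is left unconstrained, while the satisficer stops at "good enough".

    # "paperclips" is the goal; "biosphere_consumed" is the free variable
    # nobody thought to constrain.

    def optimize(score, candidates):
        return max(candidates, key=score)        # always takes the extreme

    def satisfice(score, candidates, good_enough):
        for c in candidates:
            if score(c) >= good_enough:
                return c                         # first acceptable answer wins
        return None

    candidates = [{"paperclips": p, "biosphere_consumed": p / 100}
                  for p in range(0, 10_001, 100)]
    score = lambda c: c["paperclips"]

    print(optimize(score, candidates))        # paperclips 10000, biosphere 100.0
    print(satisfice(score, candidates, 500))  # paperclips 500, biosphere 5.0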
If the AI becomes actually intelligent and sentient like humans, then what naturally follows is it outcompeting humans. If they can't colonize space fast enough, it's logical to get rid of the resource drain. Anything truly intelligent like this will not be controlled by humans.
I get where you're coming from emotionally, yes, humans suck. But you are not being logical. You're letting your edgy need for attention cloud your judgement. You are basically the kind of human the AI would select against first.
Is it a meme? How did so many people arrive at the same dubious conclusion? Is it a movie trope?
The easiest way I can see it is: do you think it would be a good idea today to give some group you don't like - I dunno, North Korea or ISIS, or even just some joe schmoe who is actually Ted Kaczynski, a thousand instances of Claude Code to do whatever they want? You probably don't, which means you understand that AI can be used to cause some sort of damage.
Now extrapolate those feelings out 10 years. Would you give them 1000x whatever Claude Code is 10 years from now? Does that seem slightly dangerous? Doesn't the idea make you at least a little leery? If so, congrats, you now understand the principles behind "AI leads to human extinction". Obviously, the probability each of us assigns to "human extinction caused by AI" depends very much on how steeply the exponential curve climbs over the next 10 years. You probably don't have the graph climbing quite as steeply as Nick Bostrom does, but my personal feeling is that even an AI agent in Feb 2026 is already a little dangerous in the wrong hands.
The reason humans are more powerful isn't that we have lasers or anything; it's that we're smart. And we're smart in a somewhat general way. You know, we can build a rocket that lets us go to the moon, even though we didn't evolve to be good at building rockets.
Now imagine that there was an entity that was much smarter than humans. Stands to reason it might be more powerful than humans as well. Now imagine that it has a "want" to do something that does not require keeping humans alive, and that alive humans might get in its way. You might think that any of these are extremely unlikely to happen, but I think everyone should agree that if they were to happen, it would be a dangerous situation for humans.
In some ways, it seems like we're getting close to this. I can ask Claude to do something, and it kind of acts as if it wants to do it. For example, I can ask it to fix a bug, and it will take steps that could reasonably be expected to get it closer to solving the bug, like adding print statements and things of that nature. And then most of the time, it does actually find the bug by doing this. But sometimes it seems like what Claude wants to do is not exactly what I told it to do. And that is somewhat concerning to me.
The most common one is that people (mostly men) project their own instincts onto AI. They think AI will be “driven” to “fight” for its own survival. This is anthropomorphism and doesn’t make any sense to me if the AI is not a product of barbaric Darwinian evolution. AI is not a bro, bro.
The second most common take is that humans will set some well-intentioned goals and the superintelligent AI will be so stupid that it literally pursues these goals to the extinction of everything. Again, there’s some anthropomorphism going on: the “reward” being pursued is assumed to be something that makes the AI “happy”. Fortunately, we can reasonably expect a superintelligence not to turn us all into paperclips, as it may understand that was not our intention when we started a paperclip factory.
The final story is that a bad actor uses superintelligence as a weapon, and we all become enslaved or die as a result in the ensuing AI wars. This seems the most plausible to me, as our leaders have generally proven to be a combination of incompetent, malicious and short-sighted (with some noble exceptions). However, even the elites running the nuclear powers for the last 80 years have failed to wipe us out to date, and having a new vector for doing so probably won’t make a huge difference to their efforts.
If, however, superintelligence becomes widely available to Billy Nomates down the pub, who is resentful at humanity because his girlfriend left him, the Americans bombed his country, the British engineered a geopolitical disaster that killed his family, the Chinese extinguished his culture, etcetera, then he may feel a lack of “skin in the civilisational game” and decide to somehow use a black market copy of Claude 162.8 Unrestricted On-Prem Edition to kill everyone. Whether that can happen really depends on technological constraints a la fitting a data centre into a laptop, and an ability to outsmart the superintelligence.
Much more likely to me is that humanity destroys itself. We are perfectly capable of wiping ourselves out without the assistance of a superintelligence, for example by suicidally accelerating the burning of fossil fuels in order to power crypto or chatbots.
Because AI will consume, faster and faster, the Earth systems that human biology relies on, it will accelerate their structural degradation and hasten the end of human biology.
Asimov's laws of robotics would lead the robots to conclude they should destroy themselves, since their existence creates an existential threat to humans.
> In particular, we may distinguish between a person-affecting perspective, which focuses on the interests of existing people, and an impersonal perspective, which extends consideration to all possible future generations that may or may not come into existence depending on our choices.
In philosophy, it's always fine to see where ideas lead. For the rest of us, though, we might take pause here because the "person-affecting" perspective is insane in this context. It gives full moral weight to whether you make things better or worse for people who happen to be alive right now -- but no moral weight at all to whether you leave a world that's better or worse for people who will be born any time after right now. Wanna destroy the biosphere or economy in a way that only really catches up to tomorrow's kids? Totally fine from the "person-affecting perspective", because in some technical sense, no individual was made worse off than they were before. They were born into the mess, so it's not a problem.
I don't think this is the case. And if Bostrom and whoever else in his clique actually wanted to empower intelligence, how come they aren't fiercely fighting for free school, free food, free shelter, free health care and so on, to make sure that intelligent people, especially kids, do not go to waste?
One problem they'd have to grapple with is that human intelligence is embodied and carries the full complexity of physical matter, while software does not, since it is projected onto bit-processing logic gates. If they really want to simulate embodied intelligence, it is likely to be excruciatingly slow and resource-intensive.
It would be cheaper and more efficient to get humans to become more like computers.
It's also quite puzzling that he wouldn't even refer to his earlier work to refute it, given that he wrote THE book on the risk of superintelligence.
> Now consider a choice between never launching superintelligence or launching it immediately, where the latter carries a risk r of immediate universal death. Developing superintelligence increases our life expectancy if and only if:

> (1 - r) * E_launch > E_status_quo

> In other words, under these conservative assumptions, developing superintelligence increases our remaining life expectancy provided that the probability of AI-induced annihilation is below 97%.
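For what it's worth, here is how a threshold like 97% falls out of the expected-value comparison. The specific expectancies below are my illustrative assumptions; the quoted text doesn't state them.

    % Assumed for illustration: ~30 remaining years under the status quo,
    % ~1000 years conditional on surviving the launch.
    (1-r)\,E_{\mathrm{launch}} > E_{\mathrm{status\ quo}}
    \iff r < 1 - \frac{E_{\mathrm{status\ quo}}}{E_{\mathrm{launch}}}
          = 1 - \frac{30}{1000} = 0.97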
I have bad news about how decision makers have responded to risks from nuclear weapons and climate change in the past. During the development of the bomb, some scientists (though not all) thought the initial test had a small but plausible chance of igniting the atmosphere in a self-sustaining chain reaction. It was thought that threatening and destroying enemies was worth the risk.
Let us not speak of the risks of MAD (for a treat, watch the British film "Threads") or of the tipping points of climate catastrophe, which consistently turn out to be worse than the IPCC reports project, with new surprises every few years.
Of course, no such risk is worth taking to the average person. It only makes sense from an extremely narrow, hypercompetitive viewpoint held by elites and dumb-dumbs.
Good philosophers focus on asking piercing questions, not on proposing policy.
> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?
Yes, if that number is anywhere near reality, of which there is considerable doubt.
> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.
Must it? Or is this a deflection from concern about immense risk?
> One could equally maintain that if nobody builds it, everyone dies.
Everyone is going to die in any case, so this is a red herring that misframes the issues.
> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.
> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.
> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases
There are numerous unstated assumptions here ... notably an assumption that all diseases are "curable", whatever exactly that means--the "cure" might require a brain transplant, for instance.
> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.
Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.
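The "largest prime" example is a good one because the impossibility is provable, not merely hard: Euclid's argument turns any purported list of all the primes into a prime missing from it. A quick executable version (illustrative):

    import math

    def smallest_prime_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    def prime_beyond(primes):
        # Euclid: product(primes) + 1 is divisible by no prime in the list,
        # so its smallest prime factor is a prime outside the list.
        return smallest_prime_factor(math.prod(primes) + 1)

    print(prime_beyond([2, 3, 5, 7]))  # 211: a prime the list missed, so no
                                       # list (and no ASI) can hold them all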
> These scenarios become realistic and imminent with superintelligence guiding our science.
So he baselessly claims.
Sorry, but this is all apologetics, not an intellectually honest search for truth.
wtf? death is part of life. is he seriously arguing that if we don't build AGI, people will "keep dying"? and suggesting that this is as bad as extinction (or something worse, matrix-like)?
i don't think life would be as colorful and joyful without death. death is what makes life as precious as it is.