It sounds like you've missed my point as well. The analogy is only supposed to illustrate that before actually inventing transportation technology (which for a long time
did get faster exponentially), humans had no real basis to understand the tradeoffs inherent in rolling vehicles, floating vehicles, flying vehicles, impulse/rocket vehicles, etc. Nor did they share our current understanding that physics imposes a theoretical maximum speed on anything that moves.
> AGI is fundamentally different because an AGI can design an even better AGI
Thanks for pointing this out. While I think I understand the distinction ("AGI is technology that works like humans do, and since humans can design better technology, an AGI can design better versions of itself"), that statement rests on several axioms:
1. Humans can design a general intelligence.
2. A general intelligence can exist in a stable state with a fundamentally "better" design than ours (i.e. one that can be exponentially more powerful, not just a bit better at poetry).
3. A general intelligence can improve itself and/or design better versions of itself without hitting diminishing returns, or it can design a fundamentally better version of itself from scratch if that happens.
It's fine if you believe all of those things, and plenty of people apparently do, but I wouldn't sweep those axioms under a blanket statement about AGI designing better AGI unless you know that everyone agrees with them.