If we become reliant on robots to survive, it would be relatively easy for those robots to kill us off very quickly if they somehow decided to do so. Why would they choose a slower method instead? You call it a "rounding error", but it still represents a measurable waste of time and resources.
I can imagine such a scenario occurring by accident (e.g. because the AI only values individual humans' happiness, not the continuation of the species), but I can't imagine an AI deliberately choosing to kill us off in such an inefficient way.