In my custom engine I did the world-to-camera transform for each object on the CPU in double precision, essentially making the camera the origin for all subsequent single-precision computations on the GPU. That works for small objects, and even for large objects that are split into small parts, like a planet split into LOD terrain tiles. But it didn't work for orbit lines, because they are a single object at planet scale that you can zoom in on to see at human scale (I didn't have an adaptive tessellation system like the one in the article).
It also wouldn't have worked at galaxy scale, where even double precision wouldn't be enough. I don't know exactly what Celestia and similar "entire universe" apps do. Emulated quad-precision floating point?
Edit: I just realized that you are talking about precision issues with the physics engine, while I'm talking about precision issues in the rendering engine. Related but slightly different. Physics engines aren't constrained by GPUs and can use double precision throughout. But they often have stability issues even in normal circumstances so I can certainly imagine that solar system scales would be a problem even in double precision.