Er, yes, but I'm not sure if this will be a satisfying answer:
If you contact IBM Global Services, they have a group that can put together a hosted database proposal with very stringent uptime guarantees. Most likely they'll push for you to be hosted on z10 (mainframe) hardware running DB2, spread across a multi-site Parallel Sysplex in multiple tightly controlled data centers. They've got a handful of customers who have been continuously up in a configuration like this since at least the late 1980s.
This topic came up at OpsU in SF a couple weeks ago. The consensus was that asking for "five 9s" is a poor proxy for the real question: "What is the cost-benefit of downtime mitigation strategies?" I've worked on systems that required (either because of regulation or health and safety) effectively 100% uptime. The cost of near-perfect uptime almost never balances against the cost of downtime, even counting lost revenue, lost customer confidence, and the like.
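To put "five 9s" in perspective, here's the back-of-envelope arithmetic behind that cost-benefit question (a quick sketch; it assumes a 365.25-day year, i.e. 525,960 minutes):

```python
def downtime_minutes_per_year(availability_pct):
    """Minutes of downtime per year permitted at a given availability percentage."""
    minutes_per_year = 365.25 * 24 * 60  # 525,960 minutes in an average year
    return minutes_per_year * (1 - availability_pct / 100)

for label, pct in [("three 9s", 99.9), ("four 9s", 99.99), ("five 9s", 99.999)]:
    print(f"{label} ({pct}%): {downtime_minutes_per_year(pct):.1f} min/year")
```

Each extra 9 cuts your annual downtime budget by a factor of ten; at five 9s you get roughly five minutes per year, which is why the mitigation cost climbs so steeply.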
Now, one of those applications did happen to be a telecom application (a switch), and there was, before deregulation, a universally accepted requirement that the switch capture billing records 99.999% of the time. No clue whether that still exists, but if that's you, there are about a kajillion preexisting solutions to this problem, and many of them are hosted.
The Magic 8-Ball says: Concentrate harder and ask again. :-)