If the system works, is fast and efficient enough for the task (which it clearly is), and supports the needed features (there was an article a while back on the things you can do in COBOL that went over this), then what is the benefit?
Sure maintaining something is not as "fun" as writing a new thing, but that doesn't make it better.
Anyway, the "obscure mainframes" claim isn't true - yes, there are cases where the easy path was simply to emulate a PDP on modern hardware - but there are, for example, COBOL implementations that target .NET, in addition to regular Unix- and Windows-targeting compilers.
Edit: and I forgot my original reason for replying: the use case for mainframe software is drastically different from consumer software.
The big scary mainframe software is generally designed to serve essentially a few users, running on a restricted set of homogeneous hardware and system software. It is generally specialized, so that's all it has to do.
Consumer software is not as forgiving - generally a user expects software they bought 20 years ago to still work, and likewise will assume that their 10-year-old machine should still be able to run new software "because it still works". Look at the commentary on Apple deprecating 32-bit software. Or the complaints about Windows N breaking legacy software, even while laughing about the things MS does to keep old things running.
I am not suggesting rewriting to use new shiny things. But if the platform is deprecated (32-bit, mainframes, Windows 98, C89, etc.), you migrate your code while it is still possible. Customers can still use an older version of your software if they are stuck in the past, but you are not compromising the future of your product.
This may be less fun for developers, but it's not usually a bad decision.
All engineers love rewriting everything, because that's human nature: "I can totally do this better than the last person".