Insurance generally offsets low precision with higher premiums and a broad pool of clients. One employee has a lot of variability, but 100,000 become reasonably predictable.
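A minimal sketch of that pooling effect, with made-up numbers (assume a 5% chance of a $10k claim per employee per year): the expected loss per head is identical at every pool size, but the spread collapses as the pool grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p, claim = 0.05, 10_000  # assumed: 5% chance of a $10k claim per employee-year

for n in (1, 100, 100_000):
    # 10,000 simulated years for a pool of n employees;
    # track the average loss per employee in each simulated year.
    per_head = rng.binomial(n, p, size=10_000) * claim / n
    print(f"n={n:>7,}: mean ${per_head.mean():,.0f}, std ${per_head.std():,.0f}")
```

The mean stays near $500 in all three cases, while the standard deviation drops from roughly $2,200 at n=1 to under $10 at n=100,000, which is why the insurer can price the big pool with confidence.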
Insuring against localized risk is old hat for insurers (fire and flood insurance, for example) and is generally handled by holding many localities in the portfolio. This works very well for one-off events, but occasionally exiting a locality is warranted: it becomes impossible to insure profitably when the law won't let insurers raise premiums to levels commensurate with the risk.
If the AI screws up, what do you fire or retrain? It seems like the AI would eventually get wrapped in so many layers of hard-coded business logic to prevent repeat offenses that you might as well just be using hard-coded business logic.
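A toy sketch of that wrapping pattern (all names and checks here are hypothetical): every incident adds another hard-coded rule over the model's output, until the rules, not the model, decide what goes out.

```python
from typing import Callable

# Illustrative post-incident rules; each one exists because of a past failure.
def blocks_refund_promise(reply: str) -> bool:
    return "refund" not in reply.lower()          # added after incident #1

def blocks_legal_advice(reply: str) -> bool:
    return "you should sue" not in reply.lower()  # added after incident #2

GUARDRAILS: list[Callable[[str], bool]] = [blocks_refund_promise, blocks_legal_advice]

def answer(query: str, llm: Callable[[str], str]) -> str:
    reply = llm(query)
    # Each rule is hard-coded business logic layered over the model; once
    # the list grows long enough, the model is just a text generator inside
    # a rules engine.
    if all(rule(reply) for rule in GUARDRAILS):
        return reply
    return "Escalating to a human agent."
```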
If the insurance company models loss-causing outputs as Bernoulli trials (i.e., each LLM use is an independent, identically distributed event with an equal chance of an error), but the errors are actually correlated due to information sharing, that could make pricing much harder: the average loss rate looks the same, but failures cluster into catastrophic years instead of smoothing out across the portfolio.
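A quick simulation of that gap, with illustrative numbers (a 0.1% base error rate, and a 1% chance per year of a shared flaw, say a bad model update hitting every user at once, raising it to 5%). The shock probabilities are chosen so both models have the same expected loss rate; only the correlation differs.

```python
import numpy as np

rng = np.random.default_rng(42)
n_uses, n_years, p = 100_000, 10_000, 0.001  # assumed error rate per use

# Independent model: errors are i.i.d. Bernoulli(p) across all uses.
iid_losses = rng.binomial(n_uses, p, size=n_years)

# Correlated model: a shared shock (e.g. one flawed model serving everyone)
# raises the error rate for every use that year. The base rate is lowered
# so the overall expected error rate still equals p.
shock = rng.random(n_years) < 0.01                     # 1% of years are bad
p_year = np.where(shock, 0.05, (p - 0.01 * 0.05) / 0.99)
corr_losses = rng.binomial(n_uses, p_year)

for name, x in [("iid", iid_losses), ("correlated", corr_losses)]:
    print(f"{name:>10}: mean={x.mean():8.1f}  99.9th pct={np.percentile(x, 99.9):8.1f}")
```

Both models average about 100 losses per year, but the i.i.d. model's 99.9th percentile sits near 130 while the correlated model's is in the thousands. Premiums and reserves priced off the i.i.d. assumption would be wiped out by a single correlated year.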