> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management. We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably. We plan to do with 100 people what Allianz and others do with 100,000.
Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job, I see one very concrete moral problem:
that the only way to provide dispute resolution and customer service to 1B people with only 100 employees is by depriving them of any chance to interact with a human, and forcing all interaction with the company to go through AI.
That, to me, is deeply disturbing, and very very difficult to justify.
Real world evidence supporting your argument:
United Health Group is currently embroiled in a class action lawsuit pertaining to using AI to auto-deny health care claims and procedures:
The plaintiffs are members who were denied benefit coverage. They claim in the lawsuit that the use of AI to evaluate claims for post-acute care resulted in denials, which in turn led to worsening health for the patients and in some cases resulted in death.
They said the AI program developed by UnitedHealth subsidiary naviHealth, nH Predict, would sometimes supersede physician judgement, and has a 90% error rate, meaning nine of 10 appealed denials were ultimately reversed.
https://www.healthcarefinancenews.com/news/class-action-laws...
This is a fantastic illustration of selection bias. It stands to reason that truly unjustified denials (driven by some hidden variable) would be appealed at a higher rate, and therefore the true error rate across all denials is something less than 90%.
That's not to say UHG are without blame, I just thought this was really interesting.
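The selection effect described above is easy to see with made-up numbers (everything below is illustrative; none of it comes from the lawsuit). Even if only 30% of all denials are wrong, wrongful denials being appealed far more often makes the overturn rate among appeals look much higher:

```python
# Toy numbers (purely illustrative, not from the UnitedHealth case):
total_denials = 10_000
wrongful_rate = 0.30          # 30% of all denials are actually wrong

wrongful = total_denials * wrongful_rate
justified = total_denials - wrongful

# People whose denial was wrongful are far more likely to bother appealing.
appeal_rate_wrongful = 0.60
appeal_rate_justified = 0.05

appealed_wrongful = wrongful * appeal_rate_wrongful      # 1800 appeals
appealed_justified = justified * appeal_rate_justified   # 350 appeals

# Overturn rate *among appeals only* (assuming appeals are judged correctly):
overturn_rate = appealed_wrongful / (appealed_wrongful + appealed_justified)
print(f"true error rate:         {wrongful_rate:.0%}")
print(f"overturn rate on appeal: {overturn_rate:.0%}")   # ~84%, far above 30%
```

So a 90% reversal rate on appealed denials is consistent with a much lower error rate over all denials, which is exactly the comment's point.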
feature, not bug
working as intended, closing ticket
[1] In the sense of "it doesn't matter if it caused the problem", rather than "it probably didn't have any effect". Because after all, "to err is human, but to really foul things up takes a computer".
For the points you brought up, why is stagnation for the purposes of upholding an ethical position a bad thing?
And yes, by definition, worrying about ethical responsibility would lead to ethical issues. That's the whole point.
Dealing with a machine is unlikely to be worse.
But it was already the case that they just arbitrarily do WTF ever they want, that outside a small set of actions that "bots" can perhaps handle fine they aren't going to do anything for you, and that the only way to get actual support for a real problem involves something being sent from a .gov email address or on frightening letterhead.
So... not really any different? You already basically have to threaten them (well, have someone scarier than you threaten them) to get any real support, this wouldn't be different.
It's a common problem with automation: the focus is often on accelerating the 'happy path', only to realise that dealing with the exceptions is where the real challenge lies.
One tried and true way around that is to cherry-pick customers as part of your strategy. You sell insurance to people who will never claim (and hence never dispute), and shun those likely to.
However, such market segmentation leaves no insurance for the people who would need it, while the people who don't need it wonder why they are buying it at all, i.e. optimal efficiency for an insurance company is to simply offer no value at all.
I.e. you could argue the whole value proposition of an insurance company is pooled risk, not segmented risk, and, critically, fair arbitration (protecting the majority of the pool from those who would commit insurance fraud, while still paying out).
Buying 'peace of mind' requires a belief in a fair dealing insurer - that's the key scale challenge - not pricing or sales.
2) Having many firms serve a market is always better for consumers as well instead of a single firm. (with a few notable exceptions)
3) In terms of large scale, it's impossible to scale efficiently across countries as you navigate new political and economic structures.
That said I suspect the founder is seriously overestimating the number of highly intelligent, competent people he can hire, and underestimating how much bureaucratic nonsense comes with insurance, but that's a problem he'll run into later down the road. Sometimes you have to hire three people with mediocre salaries because the sort of highly motivated competent person you want can't be found for the role.
Respectfully, no it can't. From a Western perspective, specifically American, and from an average middle-class person's perspective, specifically American, it only appears to be fair.
However, LLMs are a codification of internet and written content, largely by English speakers for English speakers. There are <400m people in the US and ~8b in the world. The bias tilt is insane. At the margins, weird things happen that you would be otherwise oblivious to unless you yourself come from the margins.
I don't know. Given the human beings I've interacted with in customer support, and the number of times I've had to escalate because they were quite simply "intelligence-challenged" who couldn't even understand my issues, I'm not sure this is a bad thing.
In my limited experience with AI agents, they've been far more helpful and far faster, they actually seem to understand the issue immediately, and then either give me the solution (i.e. the obscure fact I needed in a support PDF that no regular rep would probably ever have known) or escalate me immediately to the actual right person who can help.
And regular humans will stonewall you anyway, if that's corporate policy. And then you go to the courts.
And right now, the LLMs aren't really that smart, they're making up for low intelligence by being superhumanly fast and able to hold a lot of context at once. While this is better than every response being from a randomly selected customer support agent (as I've experienced), and when they don't even bother reading their own previous replies when the randomiser puts the same person in the chain more than once, it's not great.
LLM customer support can seem like a customer win to start with, when the AI is friendlier etc., but either the AI is just being more polite about the fixed corporate policy, or the LLM is making stuff up when it talks to you.
The author ignores the fact that in any normal market there are insurance products at various price points, yet somehow not all people flock to the cheapest one; on the contrary (at least where I live). Higher fees buy you, e.g., a less stressful life when dealing with the insurer.
> Completely separate from the potential ethical issues and economic implications of putting 100k people out of a job
Less work is... good? Ethics are positive here. More work, more pain.
That's a huge assumption that has no supporting evidence.
> Less work is... good? Ethics are positive here. More work, more pain
No. Work allows people to earn money and survive. Ethics are not obviously positive. Up for debate, but this is not the place.
Economic productivity putting people out of jobs is both good and necessary and it is unethical to work against it.
The way I've come to think of the current moment in history is that capitalism allocates resources via markets, and we use this system because in many situations it's highly efficient. But governments allocate resources democratically exactly because we do not always want to allocate resources efficiently with respect to making money.
Whether it "makes sense" or not, most people believe there is more to life than the efficient allocation of resources and thus it might be a reasonable opinion that making 100,000 people suddenly unemployed is bad. I doubt seriously that the OP believes having 100,000 people working indefinitely when the labor can be done more efficiently by machines is good. I think most reasonable people want to see the transition handled more smoothly than a pure market capitalism would do it.
What part of that is suffering, if it enables 100k constituents to put food on the table?
There's a huge assumption in your comment -- that having 100,000 employees necessarily guarantees (or even makes likely) that you will have some human to help you.
More likely, those 100,000 humans are mostly working on sales and marketing, and the few allocated to support are all incentivized to avoid you, and to send you canned answers. A reasonably decent AI would be better at customer support than most companies give, since it'll have the same rules and policies to operate with, but will most likely be able to speak and write coherently in the language I speak.
Insurance isn't like a widget. People have actual legal rights that insurers must service. This involves processing clerks, adjusters, examiners, underwriters, etc. Which then requires actual humans, because AI with the pinpoint accuracy needed for these legally binding, high-stakes decisions isn't here yet.
E.g., issuing and continuing disability policies: Sifting through medical records, calling and emailing claimants and external doctors, constant follow-ups about their life and status. Sure, automate parts of it, but what happens when your AI:
a. incorrectly approves someone, then you need to kick them off the policy later?
b. incorrectly denies someone initial or continuing coverage?
Both scenarios almost guarantee legal action—multiple appeals, attorneys getting involved—especially when it's a denial of ongoing benefits.
And that's just scratching the surface. I get that many companies are bloated, and nobody loves insurance companies. No doubt, smarter regulations could probably trim headcount. But the idea that you could insure a billion people with just 100, or even 1000 (10x!), employees is just silly.
That's not an assumption.
I know that I, and many others, have been able to get a human on the phone every time we needed one. Regardless of the number of those humans actually working claims, in the current system, it is "enough".
I also know that it's impossible to give that level of service when you have 1 employee for every 10 million customers.
That's really all that you need in order to make the judgement that you're not going to get a human.
Side-note: I did a quick search and found that Allstate has 23k reps who actually handle claims, out of 55k employees total, so almost half of their workforce does claims and disputes. They also have 10% market share of the US's ~340 million people, so that's, at most, ~1,500 customers per rep. That's much better odds than 1 rep for every 10 million.
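The ratios in that side-note can be checked with a quick back-of-envelope calculation (the 340M population, 10% share, and 23k rep figures are the comment's own, rounded):

```python
# Rough sanity check of the staffing ratios in the comment above.
us_population = 340_000_000
market_share = 0.10
claims_reps = 23_000

customers = us_population * market_share        # 34,000,000 customers
customers_per_rep = customers / claims_reps     # ~1,478 customers per rep

# The proposed startup: 1,000,000,000 customers served by 100 employees.
customers_per_employee = 1_000_000_000 / 100    # 10,000,000 per employee

print(round(customers_per_rep))
print(int(customers_per_employee))
```

On these numbers, the proposed company would have roughly 6,800x fewer support staff per customer than Allstate's claims operation.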
> A reasonably decent AI
And there's the problem - that AI doesn't exist. You're speculating about a scenario that simply hasn't been realized in the real world, and every single person that I've talked to who has interacted with an AI-based "support representative" has had a bad experience.
https://worldpopulationreview.com/countries/deaths-per-day
So actually, 100,000 employees puts it surprisingly close to just one case handled per day per employee.
Of course, a ton of people don’t have life insurance. And also, a lot of deaths are pretty straightforward.
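The comparison above is a single division; the ~170k/day global deaths figure below is my own approximation, not copied from the linked page:

```python
# Back-of-envelope check of the deaths-per-day comparison.
# ~170k deaths/day worldwide is an approximate figure (assumption,
# not taken verbatim from the worldpopulationreview page).
deaths_per_day_worldwide = 170_000
employees = 100_000

cases_per_employee_per_day = deaths_per_day_worldwide / employees
print(cases_per_employee_per_day)   # 1.7
```

And as the comment notes, only a fraction of those deaths involve a life insurance policy at all, so the real per-employee load would be lower still.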
The Catholic church has 1B "customers" and seems to be doing OK with human-to-human interaction, without the need (or desire) for AI. They do so via ~500K priests and another 4M lay ministers.
For the record, that strikes me as seriously improper. Life insurance is a heavily regulated offering intended to provide security to families. It is the opposite of bitcoin, which is a highly speculative investment asset. Those two things should not be mixed.
Also, the fact that the disclosure seems to limit sales to being only occurring in Bermuda seems intentional. I suspect that this product would be highly illegal in most if not all US states, so they must offer this only for sale in Bermuda to avoid that issue.
> You can borrow Bitcoin against your policy, and the borrowed BTC adopts a cost basis at the time of the loan. So if BTC were to 10x after you fund your policy, you could borrow a Bitcoin from Meanwhile at the 10x higher cost basis—meaning you could sell that BTC immediately and not owe any capital gains tax on that 10x of appreciation
Neither Meanwhile Insurance Bitcoin (Bermuda) Limited nor its affiliates Meanwhile Services (Bermuda) Limited and Meanwhile Incorporated, are lawyers or accountants. They do not provide legal or tax advice. You are wholly responsible for obtaining your own legal and tax advice.
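A toy restatement of the quoted borrow-at-stepped-up-basis mechanism, with made-up prices. This only mirrors the claim as written; it says nothing about how tax authorities would actually treat borrowed property, and it is emphatically not tax advice:

```python
# Hypothetical numbers illustrating the quoted claim (not tax advice).
funded_price = 30_000        # BTC price when the policy is funded
later_price = 300_000        # price after the claimed 10x move

# Selling a coin you bought yourself: taxable gain on the appreciation.
direct_gain = later_price - funded_price         # 270,000 taxable gain

# The claimed mechanism: borrow a coin said to carry a cost basis equal
# to the *current* price, then sell it immediately at that same price.
loan_basis = later_price
sale_proceeds = later_price
claimed_gain = sale_proceeds - loan_basis        # 0 taxable gain, per the claim

print(direct_gain, claimed_gain)
```

Whether the "borrowed BTC adopts a cost basis" framing survives contact with any real tax code is exactly what the disclaimer above declines to answer.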
And everything being incorporated in Bermuda, and regulated only by Bermuda law, makes it very impractical as insurance (go and claim whatever you want against them from your own country; I don't think it will be easy) and very obviously a tax-evasion vehicle. You can take the founder out of a consultancy, but you can't take the consultancy out of the founder.
The junior guy started crying in the meeting. Like just blubbering. My wife still feels bad for it but still…
Weird thing, instead of firing him McKinsey kept him and stipulated that he can only be in meetings when the partner is present.
Get at least a few years work experience and call me. Or alternatively, start your own dang business if you are really that smart.
The person making the recommendations isn't just out of school. They've been at the firm for years, and do have a ton of experience.
The recent grads are there for all of the grunt work -- collecting massive amounts of data and synthesizing it. You don't need years of business experience for that, but getting into a top college and writing lots of research papers in college is actually the perfect background for that.
1. This BA/Asc was on <4 hours of sleep, maybe many days in a row
2. They walked into that meeting thinking they had completed exactly what the client (your wife) wanted
And after the meeting (this I feel more confident about, as it happens a lot)
1. A conversation happened to see if the BA/Asc wanted to stay on the project
2. They said yes, and the leadership decided that the best way to make this person feel safe was to always have a more experienced person in the room to deal with hiccups (in this case, the perception of low quality work)
Isn't that... good? What else would you expect
Why would they fire him after a single incident?
Sounds like McKinsey is a more compassionate organization than you, and that's saying something :)
They essentially lied about any anticipated KPI potential and let their "tech" people put together a 15k EUR/month (before public release) platform on AWS that was such a mess, the second year's CTO started from scratch. After some heavy arguments over their poor performance, McKinsey agreed to let some "non-technical" people work there for a couple of months for free. Every argument you had with the McKinsey "engineers" felt like talking to AWS sales: they had barely any technical insight, just a catalog of "pre-made solutions" to choose from.
It is not malevolence so much as deficiency by design. First, as you said, most consultants are not real "leaders" or "tech leaders", and once you gain experience, you either leave consulting or climb the ladder to become a more senior manager, spending more time finding contracts, negotiating, and handling customer proposals and renewals than doing the actual job.
In the end, you have juniors doing the work while pretending to be sector experts, or, like the guy in the article, you get catapulted to "CEO" of a big entity of a big corp after just 4 or 5 years of basic consulting experience, without ever having worked a real non-consulting job.
In the end, when you buy an engagement from such a firm, most of the cost goes to overhead and the daily rates of a chain of useless, parasitic executives (directors, executive partners, vice presidents) who will spend 10 minutes per month reviewing slides on the project. The consultant actually doing the job will be paid at most double what a good freelancer could expect.
And regarding the spirit of it, the funny thing is that even when you do bad or evil-ish things, there is a kind of mental block that makes you truly believe you are doing useful, much-needed work, despite that not being the case.
In my own case, I often had this nagging feeling deep down that a customer we were negotiating a multi-million engagement with could have just hired a decent developer for a few thousand euros and ended up with a perfectly good, successful system.
But, as in The Matrix, when you are inside the system it is hard to consider things outside the box.
In the same way, going back to the example in the blog post, I'm quite sure the big company would have been able to find hundreds of existing employees who would have fit the bill as well or better, for a lot less money.
Having worked in highly regulated industries, I’ve learned that the best way to disrupt incumbents is by creating a product that assumes more business risk than is typically accepted. Large, regulated companies are extremely risk-averse—so if you can take on that risk in a smart, innovative way, you’ll win.
The fact that the company has become a sort of pseudo-VC (mentorship but not financing) for small teams within megacorps is interesting. I wonder why large corps find it so difficult to innovate. I think that they become somewhat "load-bearing" in society and the lines between the company and the market begin to blur. Any change the company makes causes a misalignment because they shaped the market to fit themselves.
Now that I'm working at a big organization (a Fortune 500 company), I can relate. I'm by far the most innovative person in my team and I'm being held down because I'm not doing my role (as I'm not a dev but a data analyst at the moment).
If I were doing my role, however, we wouldn't be innovating, and the C-suite wants us to innovate with AI. I'm the only one in my department who can create actual AI automations. And the IT department has basically been stripped down by upper management.
If anyone wants an actual dev building AI automations and think how we can disrupt with the state of the art, my email is in my profile.
So 3 years at McKinsey taught OP the corporate BS. That paragraph doesn't say anything useful.
> Our vision at Meanwhile is to build the world's largest life insurer as measured by customer count, annual premiums sold, and total assets under management.
We think we can be bigger (more customers, more sales, more money) than all existing players.
> We aim to serve a billion people, using digital money to reach policyholders and automation/AI to serve them profitably.
We're looking to eclipse the population of any one country and we're going to use something like Bitcoin to side-step national currencies (and maybe also to avoid existing regulatory structure, not clear from the ambiguous language).
> We plan to do with 100 people what Allianz and others do with 100,000.
We believe we can automate or use AI to eliminate the need for people to actually support these billion customers.
All three of those are very bold statements/goals.
Some of them, off the top of my head: number of customers, number of active policies, premium volume, assets under management, time to claim resolution, etc. He's talking to business people who understand the insurance market.
I really think every founder (and startup worker) needs to take seriously the marketing side of the business, and not just believe that new technology will win.
(While I, too, am allergic to bitcoin scams, given increasing levels of political corruption monkeying with markets, rates, and regulation, I can also see it as an enticing alternative for those looking to get long-term investments off the dollar. For insurance, the main question is, will the money be there and be made available? Having seen even highly-regulated pensions fail (without federal insurance recourse in the case of religious hospital behemoths), I can see how technical guarantees independent of regulation or law could be compelling.)
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
perhaps a typo in year?
The key phrase is LIFE Insurance, not HEALTH Insurance!
They are vastly different markets.
You don’t deny claims for life insurance as companies would do for health insurance. It’s a very different set of circumstances to have to deny life insurance.
Somewhat ironically, over the past 20 years I’ve come to reject PhD-type career tracks after seeing how much PhD overproduction there is and how my older colleagues only had a BS or MS. These days, I yearn to leave my Big Tech job to start a “boring” business. Right now I’m taking Accounting 101 at a local university to understand business financials better.
Very much a symbiotic vs parasitic relationship.
Unless you can solve that part of the problem as well as the big players do, you will run into problems at some point; using extreme value theory you can even estimate when.
In crypto, this means rugpull time.
My understanding of this reputation is: this often happens to the detriment of either product quality or employee satisfaction. It's debatable whether they actually have a reputation for providing value. Short term? Maybe, albeit expensive. Long term? I'd say no.
one nitpick:
> And though when we started our business in 2023 (ChatGPT wasn’t out yet), you could begin to feel that something like that was possible in a way it wasn’t before.
ChatGPT launched in late 2022...