I can get why we do satisfaction scores, but NPS never made sense to me. Like, does the justice system do NPS for family law cases? It's not like you can get divorced at a 7-11 by the clerk, and even mediation is a process inside the same model, so it's NPS for a non-competitive, self-regulated monopoly... what does it even mean to recommend it to others?
I will give you a real-world example. My girlfriend works for the government and they have an initiative to do outreach to people in the local community and let them know of all the funded help available to people for different situations.
However, they can't go to all 30k+ households at once and even in targeted areas (which is the way it works now) it takes a very long time to get to everyone.
The thinking behind a recommendation is that perhaps they go to a home, let the person know about all the free programs available, and leave a list with phone numbers. That person might not need the information at the moment, but maybe they know someone in their family or from church or elsewhere who could use it, and they pass it on.
I do understand that if you help people who have lost their home due to fire, it is silly to do a survey and ask them if they would or did tell someone else in case their home burns down. However, it isn't across-the-board silly, even for not-for-profit cases.
Maybe you are in the wrong crowd, but in the crowd that does make recommendations about operating systems, it is rarely Windows that gets recommended.
https://corecursive.com/software-that-doesnt-suck-with-jim-b...
Asking "How did you learn about us?" is a question that helps evaluate your marketing and sales pipeline. Asking "Would you recommend?" is about helping your product development process. Your sales / marketing team can't effectively influence the NPS (only thing they could do is divert marketing from customers that wouldn't be satisfied by the product).
I still think that it is useful to track actual recommendations when that is expected to happen.
The other thing the article misses IMO is that detraction is also growth, albeit negative growth (and additionally, people are in my experience much more likely to passionately recommend _against_ something they hate than to recommend something they love). So the NPS tells you a thing or two about:

- Your potential to utilise whichever chance you have to grow via word of mouth
- Your potential to squander that same chance due to people hating your product
- Your potential to have negative growth because your customers are leaving in droves.
Around 16 years ago our CEO was talked into using this. After 3 quarters of using it, he killed it because it wasn't making any visible change in sales, the only metric he cared about.
Since NPS and changes in NPS are measured by surveys, not attributed to individual sales, it's a KPI that's gameable (especially when used on its own); it's near-impossible to tell whether "improving" NPS in some existing-customer segment from '5' to '7' results in anything tangible. On the other hand, you can actually measure sales and attribute which channel they came from.
Amazon is not selling customer service as a separate product, yet that's the product that most customers really rave about, even though they don't really like anything else about Amazon.
Most places don't care about the results from an actual customer service perspective. The above gets crickets, not even an auto responder.
For companies that do care (tiny startups, mostly) I've gotten IMMEDIATE personal email responses from CEOs and founders asking what they can fix for a zero NPS. That's a great place to link the criticism section if not done previously, and to provide useful, raw feedback on what you love/hate about their products.
(I do advocate for laws against arbitrary firings and encourage employees to unionise and/or move to jurisdictions with strong labour laws).
From when I worked at a company that used it, I seem to recall it was actually just used as a binary too: 8+ is good and everything else is bad, or something like that. So it's weird that they collect it with such fake precision.
Would I recommend a product to others? Yes. Does it ever come up in conversation? No. Do I go around telling random people about this product? No.
Previously we had access to all the freeform comments, such as "as a customer we need feature X", or "as a staff member we want to see more transparency around Y".
Today, after a few particularly turbulent quarters including layoffs, all we get to see are summarized versions of the staff NPS.
Vanity project indeed.
Expand your initialisms, folks.
So you get a zero even though your product is great.
Ask the right question!
First, you start by assuming your customers can even reasonably ascertain their likelihood to recommend. They can't; there are people who answer 10 but will never recommend, and there are people who answer 0 but already have and will again.
Next, you assume your customers are idiots and don't know how an 11-point scale works by adjusting the midpoint: Instead of 5, the middle is now 7 and 8.
Then you realize there are too many numbers, so you throw several out by reducing your 11-point scale to a 3-point scale, after which you re-interpret "unlikely to recommend" as "likely to snag some other customers on my way out the door."
Finally you calculate your 'net promoters' by subtracting the percentage of low scores from the percentage of high scores to give you a nice round number that doesn't correlate with what's actually happening in the real world.
And this is just what happens when you do it 'the right way.'
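For concreteness, a minimal sketch of that bucketing and subtraction, assuming a plain list of 0-10 answers (the 0-6 / 7-8 / 9-10 split is the standard one; the function name and sample data are just illustrative):

    # Net Promoter Score from a list of 0-10 answers: % promoters minus % detractors.
    def nps(responses):
        if not responses:
            raise ValueError("no responses")
        promoters = sum(1 for r in responses if r >= 9)   # 9-10
        detractors = sum(1 for r in responses if r <= 6)  # 0-6; 7-8 are "passives" and ignored
        return 100 * (promoters - detractors) / len(responses)

    # Example: five answers -> (2 promoters - 2 detractors) / 5 * 100 = 0
    print(nps([10, 9, 8, 6, 3]))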
NPS is said to measure growth using loyalty as a proxy. But then, what does that have to do with recommendations? Nothing.
The startup never became profitable and ran out of investor money 18 months later.
There were many things wrong with the company but this was one of the things that made me most feel like I was in Office Space.
1) Even with stable mean and median, NPS tends to vary month over month, at least in my B2B settings where samples are probably much smaller than for B2C. Then management goes nuts because of very subtle shifts in the distribution caused by NPS's arbitrary aggregation into promoters, neutrals, and detractors (the sketch after point 2 shows how much sampling noise alone can move the number). Of course, investors are often married to NPS, so educating management does not solve the problem.
2) NPS varies unreasonably across cultures. We used to say, somewhat tongue-in-cheek, that NPS is a US-centric metric, where things are either amazing or awful (with little space in between). E.g., in northern/central Europe, an 8 can be pretty amazing.
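A rough simulation of point 1, with made-up numbers: the answer distribution below is held fixed, yet monthly samples of 40 responses still produce visible NPS swings from sampling noise alone (the weights and the sample size are assumptions for illustration, not real data):

    import random

    random.seed(0)

    # Hypothetical, stable "true" distribution over the 0-10 scale (weights sum to 1).
    scores = list(range(11))
    weights = [0.02, 0.01, 0.02, 0.03, 0.04, 0.08, 0.10, 0.20, 0.25, 0.15, 0.10]

    def nps(responses):
        promoters = sum(1 for r in responses if r >= 9)
        detractors = sum(1 for r in responses if r <= 6)
        return 100 * (promoters - detractors) / len(responses)

    # Twelve "months" of 40 survey responses each, drawn from the same distribution.
    for month in range(12):
        sample = random.choices(scores, weights=weights, k=40)
        print(f"month {month + 1:2d}: NPS = {nps(sample):6.1f}")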
> They can't; there are people who answer 10 but will never recommend, and there are people who answer 0 but already have and will again.
What matters with NPS is the trend over time, and getting the numbers at scale. Yes, there are people who randomly click on one end of the scale or the other, but the assumption is that on average the portion of these people is stable.
> Next, you assume your customers are idiots and don't know how an 11-point scale works by adjusting the midpoint: Instead of 5, the middle is now 7 and 8.
This does not come from the assumption customers are idiots, it comes from the idea to treat people who vote "in the middle" not as neutral, but as detractors. Which makes sense: If someone tells me "hey I know Product X and it's meh", then I'm less a promoter but more a detractor.
> Then you realize there are too many numbers, so you throw several out by reducing your 11-point scale to a 3-point scale
The 3-point scale was the goal all along though; it's the idea of an asymmetric scale that leads to the reduction from the 11-point scale to the 3-point one.
> after which you re-interpret "unlikely to recommend" as "likely to snag some other customers on my way out the door."
If your assumption is that promoters drive positive growth, it's fair to assume that detractors drive negative growth by recommending an alternative. If you believe in that core assumption that NPS measures word of mouth, then this interpretation of "likely to snag some other customers on my way out the door" is a sensible one.
> NPS is said to measure growth using loyalty as a proxy. But then, what does that have to do with recommendations? Nothing.

I don't think the underlying assumption is bad. That's how influencers work: people are more likely to buy something that is being recommended to them by someone they trust and someone who is passionate about the product.
Does NPS work? I don't know - I'm not using it, as I said above. But at least the assumptions under which NPS is designed, on top of the idea of word of mouth as a growth driver, seem solid to me.
That assumption is predicated first on the idea that people can and will tell you their likelihood to recommend within a reasonable degree of accuracy. I don't think they do.
1: https://www.xminstitute.com/data-snippets/gap-consumer-recom...

2: https://hbr.org/2019/10/where-net-promoter-score-goes-wrong
> If you believe in that core assumption that NPS measures word of mouth, then this interpretation of "likely to snag some other customers on my way out the door" is a sensible one.
But that's not the scale given to the respondent. The scale is given as going from "not at all likely" to "very likely" to recommend. There isn't an option for likely to recommend against. The low end of the scale probably captures some, but to assume it is near 100% is a mistake.
Like, I've seen these for years. It feels like 5 seconds after I open any given website a "rate us on 1-5" popup shows up right in the middle of the screen. I assumed it was just some thing that's automatically thrown in just to annoy users and has no practical purpose, like the cookie warnings (that are ignored regardless of what you select), the email spam requests (which nobody reads despite what people claim [if someone out there wants to get angry and claim their daily spam emails and annoying popups do good for their business, go ahead {I will laugh at you}]), and the "subscribe/follow us on (social media)" that plague every site these days.
Knowing that some manager thinks this is valuable info, and that it may decide their job, is just hilarious. I used to pick numbers at random just to dismiss it, but now I'm motivated to actively mess with them.
It's the same thing with regular product purchase. You get these "would you recommend?" or "would you purchase again?" solicitations either immediately after receipt of the product or within a few days, or solicitations to go leave a review somewhere. How am I supposed to rate something I've owned for somewhere between minutes and days and have barely used? Many of my most disappointing purchases seemed like great things when they first arrived or I first put them together, but after prolonged usage they showed serious flaws or simply stopped working.
Clearly the less fuzzy way to define it is "How many people have you recommended this product/service to in the last n months?" not "Would you, if asked?"
Also, whether customers talk to each other (directly, privately) is not the only thing; there are online reviews, resources like HN, etc. And it's always hard to tell what's compensated and what isn't, e.g. Snowflake's "grassroots" testimonial campaign on LinkedIn, run with a huge budget.
Astrology isn't about stars. It's about cold reading if done in person, and about the art of writing descriptions that sound specific but actually apply to pretty much everyone.
NPS is a way to reduce a histogram to a scalar.
* when should we plant this crop?
* how do I generate a random number if literally any human bias is worse than a random choice?
* were you the oldest (or youngest) in your class at school?
That is the only redeeming quality of any of them, however.