Talking to real customers and helping them solve real problems is really potent. And you can get more than just the color of a button. You can get the direction your company needs to go for months.
I think part of the problem is that science takes too long. It's like waiting for evolution to play out. Your company is at war with everything: entropy, the economy, your competition, the attention span of customers. Do you have time to science your way to success? Probably not. Do you have time to gamble on your intuition? Barely.
Collecting data isn't bad per se. But you should always be asking yourself if you are solving the right problems before you waste your time on it.
This, this, and THIS again!
An example case (of many I could cite) is Transferwise.
They used to be good, but now they've degenerated into a quagmire. If they could be bothered to talk to their customers, or even just send round some box-ticking surveys, they might find that out.
No amount of A/B testing, data lakes or other "data science" buzzwords is going to help them.
But no, instead it's the same old story:
Rebranding from Transferwise to Wise because, well, I guess that's the usual shit companies do when they've run out of ideas (Aberdeen rebranding to Abrdn is another fine example from the financial sector).
Doing stuff worse because it benefits the business (read: increases margins) rather than the customer. Transfers take forever. Customer service is non-existent.
Funnily enough, it all seems to have started going downhill around the same time they floated on the stock market. Funny, that!
Whilst I am aware that a company's strict legal obligation is to put its shareholders first, it doesn't have to be that way, at least not in a blatant manner. After all, disgruntled customers don't do much good for shareholders' pockets.
You should be using data to invalidate your assumptions, separate the real from the perceived, and to draw those aha moments mentioned in the article. Then use that to prioritize and decide what's worth iterating on and when it's good enough to move on to bigger problems.
As the article says, data won't tell you everything, which is why your data people need to also be product people, and not just SQL monkeys or PhDs in a backroom doing analyses nobody will understand or read.
I suspect there’s a disconnect here where you are talking about smaller, early stage companies. A lot of the time they don’t have the sample size to do proper AB testing, or the resources to do it properly, and they have less to lose. So shooting from the hip is more likely to be the only reasonable choice.
You can do it, but you can generally only test really large changes, and often if you have good customer communication you can pick up on what the change means with some interviews and showing the customer(s) what the new thing looks like.
This is generally much faster and cheaper (and I say this as someone who adores designing, running and analysing AB tests).
One of the many issues is you only get to talk to customers willing to honestly talk to you.
That means you can't hear from potential new customers you wouldn't know were part of your market. You also don't hear from customers who would want to leave you but just haven't put it into words yet.
A/B testing helps get more insight into what customers actually do (and not what they tell you) and also gets numbers on how big an impact your changes have. The time spent waiting for the results is insignificant compared to the impact of ill-advised changes in general.
In the late 2000s I was part of a team that was developing some pretty incredible software to help chip designers manage the added complexity as features got smaller (context: our customers were freaking out about how hard it looked like 45nm was going to be). We did all of these customer satisfaction surveys and shit like that and got... some decent feedback but mostly just all rainbows and unicorns positive reviews.
Chip design software is complex and every customer of ours needed some custom integration, which is where my small group came in: the three of us were dual-degree EE/CS folks. We could sit down with the chip designers and understand their workflow and then go back to our hotel room at night and write the integration code to connect our tool with whatever bespoke workflow they had internally. All of that story leading up to the main point:
The feedback I got talking to random people outside in the smoking area was dramatically more valuable than anything we got from our customer surveys. This wasn't a strategy; I'd just go out for a smoke every hour or two and there'd usually be a couple of employees out there doing the same. "Hey, I don't recognize you, are you new?" "Oh, no, I'm here helping with the $X integration" "Oh! Hey so maybe you can help me then... in the latest release it looks like feature $X should be able to do $Y but I can't seem to get it to work..."
Pretty much every time I went outside I ended up learning something new, either an interesting way our software was being used or misused, or some other detail about how these guys' day-to-day workflow worked that we hadn't even thought of addressing.
We had some customers in Japan, too, where there's an interesting social hierarchy when having business meetings. Me and the junior engineer across the table couldn't talk to each other directly in the meetings; all of the questions had to go through my manager, and a translator, and a senior manager on the other side of the table... in a big game of telephone even though we were in the same room. After the meeting I would usually go have a smoke and just happen to find the junior guy from the meeting doing the same. "You know, I do speak English... and have a few questions if you don't mind me asking directly" :D
While I can't recommend picking up a persistent nicotine addiction for doing better user research, I also can't say that I've ever encountered a more organic way to get really good unfiltered user feedback. Surveys, user studies, focus groups, etc... they're all decent tools to varying degrees but don't always get the level of honesty you can get out of someone sharing 5 minutes with you in the smoker's corner.
Data can be very helpful though. We pull data from the public company records which show earnings to find possible investors. Then we combine that data with our sales data from HubSpot and Microsoft CRM (don’t ask me why we have both) as well as our internal sales systems. Which provides good data points for our sales department when deciding which potential investors to focus on, and shows them how much they’ve already “bothered” people. 10 years ago all of this was basically done by hand, now it’s mostly automated. Which sucks for the data researchers, but since the majority of those used to be unpaid students who now get to actually work on something more related to their studies, it’s mostly a win-win.
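As a rough sketch of that kind of join, the record shapes, company names, and numbers below are entirely hypothetical (the real feeds from public filings, HubSpot, etc. will look different); the idea is just joining public-record leads against CRM contact history and ranking prospects:

```python
# Hypothetical lead records pulled from public company filings
public_records = [
    {"company": "Acme",    "earnings_m": 120},
    {"company": "Globex",  "earnings_m": 45},
    {"company": "Initech", "earnings_m": 300},
]

# Hypothetical CRM data: how often sales have already "bothered" each company
crm_contacts = {"Acme": 7, "Initech": 2}

# Join the two sources; companies never contacted still show up as prospects
leads = [
    {**rec, "contact_attempts": crm_contacts.get(rec["company"], 0)}
    for rec in public_records
]

# Rank prospects: least-bothered first, then highest earnings
leads.sort(key=lambda r: (r["contact_attempts"], -r["earnings_m"]))
```

In practice the join key, dedup logic, and ranking criteria are the hard parts; the sketch only shows the overall shape of the automation.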
Where data doesn't really help us is in marketing. Exactly because it's showing us the past, and while that can be useful, it often hasn't been very helpful in deciding how to do future campaigns. I imagine a lot of this is also true in other fields which produce content for human consumption. I guess in some areas it will be, but in most "creative" fields the data won't necessarily show you what people will find "fun" or "interesting". I think Hollywood, big gaming companies as mentioned in this article, and others are sort of struggling with this. That is just my guess, though, as I only have experience with how our marketing department has come to the conclusion that while data is a good measurement of the success of various initiatives, it's not very useful in helping decide what sort of campaign to run next, outside of which channels are the best focus, and even then, that also changes over time.
Then you have people that try to get greedy. On more than one occasion I have designed a test where two variables change, results are great, rollout projections are great, the stakeholder attempts to do the rollout without changing the variable that creates incremental expense, and the rollout does not meet projections. Then they reluctantly do what they were supposed to do in the first place and everything is fine.
How much faster can we process payments through provider A versus B in different countries around the world?
If we offer insurance after checkout, do we convert more than offering it before?
What ranking algorithm of skus leads to the highest conversion?
Does a customer care if you shave off 2% of the final cost or do they care about having world-class customer support?
Does a customer care if a product has a higher conversion rate or if it is the product they were looking for in the first place?
Does a customer care more about how long payment processing takes or do they care that it takes their local mobile payment app?
The only way to know is to talk to customers. Without doing so you’re just coloring a different variety of button.
- being real
- talking to customers
- actually listening to what people say
- using intuition
- cutting to the chase (big and meta problems first)
- doing risky exploration for abductive reasoning
is only as good as the nominal culture we're in. As the author says:

> I’m no longer a believer in decision-by-spreadsheet.
That's nice for you. Me neither. But every day we must interact with dull-headed data crunchers who set the pace and policy.
Relying on such numbers, however, is equivalent to falling back on intuition and gut feelings for decision-making (or worse), while believing that the decisions were based on numbers.
Not because it finds new knowledge, but because it keeps your product teams honest.
It's really easy to delude yourself and others about your project when your promotion is on the line, and A/B tests let you actually evaluate whether the change helped or not.
At small companies, you're not trying to find 2% effect sizes; anything that small is already a failure, so you don't need statistics to tell you what worked.
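For what it's worth, the evaluation an A/B test gives you can be sketched as a simple two-proportion z-test; the conversion counts below are made-up numbers, not from any real experiment:

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: did variant B convert better than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value via the normal CDF
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical counts: 200/1000 conversions on A vs 260/1000 on B
z, p = ab_test_z(200, 1000, 260, 1000)
```

With an effect that large (a 30% relative lift), significance is easy to reach even at modest sample sizes, which is exactly the point above: at a small company, only effects big enough to be obvious are worth shipping anyway.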
As Warren Buffett likes to say: "It's better to be approximately right than to be precisely wrong."
This should be a poster in every company doing any kind of data.
maybe this is the difference between a business (or engineering) mindset of "it must work effectively; how and why it works are secondary"
in contrast with a perhaps more philosophical (scientific? purely mathematical? reverse-engineering?) study goal, in which case whether something works is secondary to having a full theory of what's going on
I watched that company converge on the blandest, clunkiest, least useful features over and over and over again.
Blindly trusting the data without any product vision is just design by committee at scale.
Hypothesis testing is a great way to discover laws.
That is to say: Data can certainly advise you what not to do. Such as flying the ship into that spooky nebula, Captain.
> Spend time where your customers are and make your own conclusions.
This is a great article, very well-written, and I enjoyed reading it. However, couldn't intuition and spending time with customers be considered another way of collecting data points to inform data-driven decisions?
Anyone can plug numbers into a formula (there are so few barriers to doing that, that most people probably do it kind of wrong and get approximately directionally-correct results anyway), but handling qualitative data requires really knowing what you're doing from first principles.
And all of these come from data. There is no non-linearity here; just a widening of the perceptive funnel.
Then there is politics-driven development. You want to do something, so you search the trove of data for data that supports what you are doing. Or you look at the data in a strategy meeting, and then ignore it (I've seen this happen mostly in board meetings of large companies).
Another thing to say is that not all companies should or can be vision-driven; some companies are just copycats.
Where you have enough users (maybe you're a major online retailer), A/B testing should be a vital part of your toolkit. Not the only tool, but you definitely need to test every change you make. If you can gather enough data within 24 hours, why wouldn't you test your change?
That being said, A/B testing isn't the be-all and end-all. It just gives you some information to make a decision. You still need to know your customer, speak to them, survey, observe, etc. You might even pick a "losing" variation with the aim being to reach a more optimal business outcome. Data doesn't give you the right to abdicate your responsibility to make good decisions.
There are cases where A/B testing can't help at all. A great example is in low-volume but critical flows (think SaaS conversion funnels). For these, you need to rely on the other skills you have at your disposal.
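To illustrate why low-volume flows are out of reach, here is the standard rule-of-thumb sample-size estimate (n ≈ 16·p(1−p)/δ² per arm, for roughly 80% power at α = 0.05); the baseline rate and lift below are made-up numbers:

```python
def sample_size_per_arm(base_rate, relative_lift):
    """Rough per-arm sample size for ~80% power at alpha = 0.05.

    Uses the common rule of thumb n = 16 * p * (1 - p) / delta^2,
    where delta is the absolute difference you want to detect.
    """
    delta = base_rate * relative_lift
    return 16 * base_rate * (1 - base_rate) / delta ** 2

# Detecting a 10% relative lift on a 3% baseline conversion rate
n = sample_size_per_arm(0.03, 0.10)  # tens of thousands of users per arm
```

At a 3% baseline, detecting a 10% relative lift needs on the order of 50,000 users per variant, which a low-volume SaaS funnel simply never sees in a reasonable test window. Hence the need for the other skills.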
For instance, say you are making a pretty advanced 3D web app, and you notice in your analytics that your userbase consists only of Chrome and Safari users.
An easy conclusion is to focus your testing on those two platforms, or maybe even drop support entirely for Firefox and Edge by using some WebKit-specific API.
A not-so-easy conclusion is that the experience might be so bad or buggy on a non-WebKit browser that anyone who tries the app there just gives up on it.
The reasonable truth in this case? You should assume the standard browser distribution unless you're operating in a specific market; it might also be perfectly fine to drop non-WebKit browsers if the ROI of supporting them is not worth it for your goal. All of which needs not data but intuition and common sense.
People learn over time that complex web apps work badly in Firefox, because developers mostly test in Chrome.
So they don't even bother trying it in Firefox.
What is the value of that?
Language alone, or in this case, information, does not dictate our actions. However, there is persuasive power inherent in language — specifically, language that exposes the subjective gains individuals aim to achieve through their actions, often influencing individual behavior.
There exists an unexplored connection between our contemporary understanding of data and praxeology.
But analytics/diagnostics are extremely important to discover bugs, because you can't rely on customers to tell you about them.
Data-informed decision-making is great. Data-driven decision-making, not so much. You still need to trust your gut.
Data-ignorant decision-making is a killer, too.
Data also provides valuable insight in a negative manner. If your conversion rate is abysmal, the data tells you to get out of your cubicle and start talking with real customers to find out what the data isn't telling you. It is still a data-driven decision. It is just a negative one.
However, in the end, data isn't going to find your next billion-dollar opportunity. You need to find a gap in the market that no one has tackled before, and for which, of course, there is no data; otherwise someone else would have jumped on it.
Data informs your values, but your values are a choice. From the very beginning, data will never define your values. Even with perfect knowledge, your decisions are still going to be choices. Combine that with the fact that our knowledge is imperfect (our data is incomplete, biased, a single and partial perspective)...
"A company should seek to maximize its profits," is a normative statement, not a truth. It is a choice of values.