Low-traffic CRO
In the past, I’ve told a lot of potential clients that they don’t have sufficient traffic (or sufficient conversions) to run A/B tests.
At the time, I thought I was doing a service by discouraging them from investing in the wrong methodology.
I’m starting to rethink this stance.
Who has low traffic and conversions?
In this context, let’s say that < 500 conversions per month is “low.” (So, lots of businesses have low conversions.)
For some, it’s a temporary phase. They’re still finding their product-market fit, or growing their audience. Their product line is expanding over time. Eventually they’ll hit the magical “500 conversions a month” mark and start experimenting.
For others, it’s a way of life. If you’re selling a high-priced item or serving a tiny niche, you might be quite comfy at 15 sales a month, and see 20 as a stretch goal. You’ll never be able to run a 5-way A/B/n test that optimizes for sales.
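To put some numbers on that, here’s a back-of-the-envelope sample-size calculation using the standard two-proportion formula at 95% confidence and 80% power. The 2% baseline conversion rate and the 1,000 visitors per month are hypothetical, but they’re in the right ballpark for the kind of business we’re talking about:

```python
import math

def sample_size_per_variant(baseline_rate, relative_lift,
                            z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant for a two-proportion
    test at 95% confidence and 80% power (standard formula)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical shop: 2% conversion rate, hoping to detect a
# 20% relative lift (2.0% -> 2.4%).
n = sample_size_per_variant(0.02, 0.20)

# Two variants splitting ~1,000 visitors per month:
months = n * 2 / 1000
print(n, round(months, 1))  # roughly 21,000 visitors per variant
```

That’s a simple two-variant test running for years before it reaches significance. A 5-way test splits the same traffic five ways, which only makes it worse.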
Why do they want to do A/B testing?
Put simply, they want more conversions. And they’ve heard that A/B testing is a way to get that. (It can be.)
In addition, they probably want a data-driven approach to making decisions about their website, UX, and product.
Why do they want that?
More conversions are more conversions, whether they happened because you implemented the winner of an A/B test or because you changed the page based on reading tea leaves.
But with the former approach, you’re confident that you’re doing the right thing. And if you don’t end up with more conversions over time, you can safely blame the testing tool, seasonality, or the vagaries of consumer behavior. (You did everything you could!)
The tea leaves approach leaves you full of doubt. And if it doesn’t work out, the blame is entirely on the tea leaves reader. (WTF were you thinking?)
So the drive to do A/B testing comes from a desire to get more conversions by making changes in which you have confidence … all while maintaining plausible deniability if things don’t work out.
But low-traffic sites can’t support A/B testing.
What can they do instead?
Can we bring hard, numerical data to low-traffic sites, even in the absence of statistically significant A/B test results?
Or can we at least bring a decision-making method that increases conversions and instills more confidence than the tea leaves approach?
I think so, and I’m starting to experiment with some possibilities.
Here are some of the tools we can use:
- Session recordings
- Heatmaps and scrollmaps
- User testing
- User interviews
- Polls and surveys
What I have in mind is packaging up a combination of these methods so that they yield quantitative data that feeds into a decision-making process.
Because even the most damning user testing session imaginable is still just, like, someone’s opinion. Because every heatmap is based on a slice of a slice of total traffic.
Jumping to action based on a single observation is not much better than reading tea leaves. But when multiple data sources point to the same problem or solution, ignoring them puts you right back in tea leaves territory.
Because data of every type is so easy to twist and distort, we need a plan for how we’ll interpret our data before we start.
We’re all biased. If our plan is “see what the data says,” there’s a good chance we’ll see what we expect to see.
So the heatmaps and scrollmaps will yield scores for page elements. The session recordings and user interviews get coded for specific actions, experiences, and problems.
And this data gets combined with the survey data according to predefined weights based on our sample size and confidence for each data source.
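To make that concrete, here’s a minimal sketch of what such a scoring scheme could look like. Everything in it is illustrative, not a finished methodology: the source names, the weights, the flag rates, and the action threshold are all assumptions I’ve made up for the example.

```python
# Predefined weights, set BEFORE looking at the data, scaled by a
# rough sense of sample size / confidence in each source.
# (All numbers below are hypothetical.)
WEIGHTS = {
    "session_recordings": 0.30,   # n ~ 200 sessions
    "user_testing":       0.20,   # n = 5 moderated sessions
    "user_interviews":    0.15,   # n = 8 interviews
    "survey":             0.35,   # n ~ 150 responses
}

# Share of coded observations in each source that flagged a given
# problem, e.g. "shipping cost surprise at checkout".
flagged = {
    "session_recordings": 0.40,
    "user_testing":       0.80,
    "user_interviews":    0.50,
    "survey":             0.30,
}

def evidence_score(flagged, weights):
    """Weighted average of how often each source flagged the problem."""
    return sum(weights[src] * flagged[src] for src in weights)

score = evidence_score(flagged, WEIGHTS)
print(round(score, 2))  # compare against a threshold agreed in advance
```

The point isn’t the arithmetic; it’s that the weights and the “act on it” threshold are fixed before you collect the data, so you can’t quietly re-weight the sources afterward to confirm what you already believed.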
If anyone out there is doing similar work, using data from additional research methods to supplement or replace their A/B test data, please hit Reply and let’s chat about it.
If you’re not doing that, but would like to hear more, let me know.