The risks of not knowing
Yesterday I boldly asserted that a significant increase in form page visits which doesn’t yield a significant increase in form submissions … might still be a win.
Of course the gold standard for winning experiences is a statistically significant increase in a metric that’s tied to revenue - like form submissions.
But when your total form submissions are, say, 100 a month, you practically have to 1.5x the conversion rate to reach significance.
And when you’re measuring microconversions higher up the funnel, where volumes are larger, smaller lifts can reach significance.
So “significant increase in microconversions, some increase in next step conversions” is a less stringent success criterion that should still keep you on track.
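To see why small volumes force big lifts, here’s a minimal sketch: a pooled two-proportion z-test using only the standard library. The traffic numbers (5,000 monthly visits split evenly between arms, a 2% baseline rate, so ~100 submissions a month) are hypothetical illustrations, not figures from this post.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))); two-sided p = 2 * (1 - Phi(|z|))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical month: 5,000 visits split evenly, 2% baseline -> ~100 submissions.
n = 2500
baseline = 50                                  # 2% of 2,500 visits per arm
print(two_proportion_z(baseline, n, 60, n))    # 1.2x lift: z ~ 0.96, not significant
print(two_proportion_z(baseline, n, 75, n))    # 1.5x lift: z ~ 2.26, just clears 1.96
```

At these volumes a 20% relative lift is invisible to the test; you need roughly the 1.5x jump before z crosses the conventional 1.96 threshold.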
Here’s the tradeoff you’re making. When you get results with 95% statistical significance, you can say “If we ran 100 A/A tests, only about 5 of them would show an effect this big by chance.”
When you get results with no statistical significance, you have to say “The effect is not outside the normal range of fluctuation.”
Try saying it, then try adding “… but it’s in the right direction, and the lift in the microconversion is big enough to matter to us.”
If it comes out with sincerity and conviction, you’re good.
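That “100 A/A tests” framing can be checked directly: simulate experiments where both arms share the same true rate, and count how often a standard two-proportion z-test cries significance. The rates and sample sizes below are made up for illustration; the ~5% false positive rate is the thing being demonstrated.

```python
import math
import random

def aa_false_positive_rate(n_tests=1000, n=1000, p=0.05, seed=1):
    """Simulate A/A tests (identical true rates in both arms) and return
    the fraction that a two-proportion z-test flags as significant."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        # Both arms draw from the SAME true conversion rate p.
        conv_a = sum(rng.random() < p for _ in range(n))
        conv_b = sum(rng.random() < p for _ in range(n))
        p_pool = (conv_a + conv_b) / (2 * n)
        se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
        z = abs(conv_a - conv_b) / (n * se)
        p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
        if p_value < 0.05:
            hits += 1
    return hits / n_tests

rate = aa_false_positive_rate()
print(f"'Significant' A/A tests: {rate:.1%}")  # roughly 5%, by construction
```

That roughly-5% is exactly the risk you accept at the 95% threshold; without significance, your result sits somewhere inside that band of normal fluctuation.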
It is possible that you increased microconversions but had no impact on the conversion that matters. It’s even possible that you decreased the conversion that matters. (Both are possible even with statistical significance; you’ve just got more evidence to the contrary.)
But within the window of time the experiment was running, that conversion rate went up, and stayed up. And you looked yourself in the eye and asserted “this experience is an improvement over what we’ve got.”
Take the win, move forward. Pile up wins like these for 6 months or so and either they amount to a lift you can tie to revenue, or they don’t. At that point, you’ll know what to do either way 😁