I’m working on a CRO glossary (exciting details soon). Let’s look at the first entry – what it is, and how to avoid it.
What is Analysis Paralysis?
It’s the tragic state of affairs when you’ve run an experiment, gathered data, and can’t make a decision based on it.
What causes it?
Too many metrics
You’ve measured conversions, average order value, pages viewed, time on site, and six others. Conversions are definitely up, but … so is bounce rate. Surely we don’t want to increase bounce rate, right? We’d better run this test again.
In this case, your paralysis is only compounded by the multiple comparisons problem: the more metrics you track, the more likely at least one of them moves purely by chance.
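To see the effect, here’s a minimal simulation (the numbers are illustrative, not from any real test): an A/A experiment with no true difference, tracked with ten metrics, each judged at the usual 5% significance threshold.

```python
import random

# Simulate A/A tests (no real difference between variations) that are
# tracked with many metrics. Under the null hypothesis each metric's
# p-value is uniform, so each has a 5% chance of looking "significant"
# by pure chance. With 10 metrics, the chance that at least one fires
# is roughly 1 - 0.95**10, i.e. about 40%.
random.seed(42)

ALPHA = 0.05       # per-metric significance threshold
N_METRICS = 10     # conversions, AOV, bounce rate, ...
N_TESTS = 10_000   # simulated experiments

false_alarms = 0
for _ in range(N_TESTS):
    p_values = [random.random() for _ in range(N_METRICS)]
    if any(p < ALPHA for p in p_values):
        false_alarms += 1

print(f"Experiments with at least one spurious 'winner': "
      f"{false_alarms / N_TESTS:.0%}")
```

Run it and roughly four in ten perfectly null experiments will show some metric “winning” — plenty of raw material for second-guessing.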
Organizational roadblocks
Your test results are conclusive and you’ve found a winner. Except … everyone was fine with running it as a test, but moving it to production doesn’t make sense given RANDOM_INITIATIVE. Interesting learning, though!
How can you avoid it?
Optimize for one metric
Ideally, that means not just one metric per test, but one metric across the whole testing program. Make sure it’s a metric that matters – revenue, subscriptions, signups.
It’s not ideal, but if necessary your metric can come with guardrails, like “increase conversions without negatively impacting email signups.” Which leads to …
Decide how you’ll act on results before launch
Sanity check: is the team prepared to put any winning variation into production? Is it feasible to do so? Will anyone block you? (Can you take them to lunch today?)
What will you do if there’s no conclusive winner? Stay with the baseline experience, or pick a favorite and go with it? (Who picks?)
If you get a winner based on conversions, but observe that bounce rate is also up, is it still a winner?
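One way to settle that question in advance is to pre-register a decision rule before launch. A hypothetical sketch (the function name and thresholds are made up for illustration, not from any testing tool):

```python
# Hypothetical pre-registered decision rule, agreed on before launch.
# "Increase conversions without letting bounce rate rise" becomes code
# instead of a post-hoc debate.

def decide(primary_significant, guardrail_change, guardrail_limit=0.0):
    """Ship the variation only if the primary metric has a significant
    win AND the guardrail metric (e.g. change in bounce rate) stays
    within the agreed limit."""
    if not primary_significant:
        return "keep baseline"        # no conclusive winner
    if guardrail_change > guardrail_limit:
        return "keep baseline"        # winner, but guardrail violated
    return "ship variation"

# Conversions win is significant, but bounce rate is up 2 points and the
# agreed limit was "no increase" -- so the rule says keep the baseline.
print(decide(primary_significant=True, guardrail_change=0.02))
```

The point isn’t the code itself; it’s that the rule exists, in writing, before anyone sees the results.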
Have an open discussion about these questions, using a future experiment as an example. Get all the fears and misgivings out. Most likely, you only need to answer these questions once, as a team.
Write down what you decide and post it prominently. You can be proud of creating a culture that is resistant to analysis paralysis ᕦ(ò_óˇ)ᕤ