No more interesting results, please
July 2019
If your A/B test reporting looks like this, please stop:
We observed a slight lift in Average Order Value, not at significance. Conversions were slightly down, but Visits to Cart were up. Clicks on the new "Your Perfect Shoe" module were higher on mobile than on desktop. This is an interesting result and we will explore it in future tests.
I've written reports like this myself, but I'm not going to do it anymore. Here's what I'll say instead:
The test yielded no actionable results; the team will conduct a root cause analysis to identify why this happened and prevent it from happening again.
The problem with interesting results
Experimentation is expensive. Someone's paying a lot of money to access your precious brain and its ability to make magical, measurable things happen on websites. They're almost certainly expecting more out of that investment than "Hmm yeah that's weird lol."
Even if you're working pro bono, there's opportunity cost to consider. You can only run so many tests per quarter. How many test slots are you willing to yield to inconclusive ¯\_(ツ)_/¯ tests?
Action over interest
A successful test leads to a new perspective. A recommendation, backed by data. An assertion about what the company should do next.
If your experiment results stop short of this, you'll get limited attention from decision makers in the organization. (And you'll be on shaky ground when budgets are being set.)
If you're dedicated to running experiments that yield this level of insight, you'll probably change some things about how you set up and prioritize tests. You'll generate more informative data, make recommendations based on it, and stand behind them. (People with budgets will notice.)
Inconclusive tests can yield actionable results
So how do you avoid interesting results? Does that mean every test has to have a conclusive winner?
Not exactly. If you test 7 very different experiences on your Shopping Cart page and see no measurable impact on conversions from any of them, you have an actionable result (provided each variant saw enough traffic to detect a meaningful effect; see the sketch below).
The action you'll take is "stop testing on the Cart." And maybe "schedule a podcast appearance to brag about our super-optimized Cart."
But you can't draw this conclusion or confidently take this action if you only tested a single change. You're stuck noting some interesting findings and trudging forward to the next test, hoping something better happens.
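How do you know "no measurable impact" isn't just "not enough traffic"? Before calling a flat result actionable, it's worth checking that each variant had enough visitors to detect a lift you'd actually care about. Here's a minimal sketch in Python using statsmodels; the baseline conversion rate and minimum worthwhile lift are hypothetical placeholders, so swap in your own numbers.

```python
# A minimal power check: how many visitors per variant would we need
# to reliably detect the smallest lift worth acting on?
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05   # hypothetical Cart conversion rate
min_lift = 0.10        # smallest relative lift worth acting on (10%)

# Cohen's h for the gap between baseline and baseline + lift
effect_size = proportion_effectsize(baseline_rate * (1 + min_lift), baseline_rate)

# Visitors per variant for 80% power at alpha = 0.05.
# (With 7 variants you may want a stricter alpha, e.g. Bonferroni: 0.05 / 7.)
n_required = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Need roughly {n_required:,.0f} visitors per variant")
```

If every variant cleared that bar and none of them moved the metric, "stop testing on the Cart" is a recommendation you can stand behind.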
Have a lovely weekend. May your life be interesting and your data be actionable. Next week we'll look into test prioritization - when it matters, how not to do it, and how it both reflects and shapes your team culture.