
A/B test reporting


When a test concludes and it’s time to celebrate, what details do we need to include?

There are conversions per variation, conversion rate per variation, percentage lift, statistical significance, confidence … per metric. The report can easily become a bloated mess.

Here are the questions every report should answer, whether it’s a slide deck or a quick Slack message:

What changes did the winning variation make? Which pages and elements were modified, and how? Bonus points if you convey this in a single image.

What audience does this test involve? What segments and devices were included?

What’s the current value of this audience? If it’s desktop visitors to the homepage, great: how much monthly revenue is driven by desktop visitors to the homepage?

What’s the expected value once we put the winner into production? Here’s where you math it out. What’s the winning variation’s conversion rate times expected monthly visitors times average conversion value? Show your work! At least in a footnote or appendix.
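The "show your work" math above is just three numbers multiplied together. Here's a minimal sketch in Python, using made-up placeholder numbers (the visitor count, conversion rate, and conversion value are all hypothetical; plug in your own):

```python
# All values below are hypothetical, for illustration only.
monthly_visitors = 40_000      # expected monthly visitors in the test audience
winner_conv_rate = 0.034       # winning variation's conversion rate
avg_conversion_value = 85.00   # average revenue per conversion, in dollars

expected_monthly_value = monthly_visitors * winner_conv_rate * avg_conversion_value
print(f"Expected monthly value: ${expected_monthly_value:,.2f}")
# → Expected monthly value: $115,600.00
```

Putting exactly this arithmetic in a footnote or appendix lets a skeptical stakeholder check the claim in ten seconds.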

Is this exciting? Is the expected value a big number relative to the entire site? Sometimes people have to be told that they should care about a result, so tell them.

How sure are we? Address doubts and uncertainty. We’re never completely certain, so share the evidence that supports our faith in this win. This is the time to mention all the other variations our winner beat, along with statistical significance or probability to be best.
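If you want to back "how sure are we" with a number, one common approach (not necessarily what your testing tool uses) is a two-proportion z-test on the conversion counts. A self-contained sketch, with hypothetical counts:

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: control converted 300/10,000; variation 360/10,000
z, p = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (0.05 is conventional) is the kind of evidence worth citing; "probability to be best" comes from Bayesian tooling instead, and either is fine as long as you report it honestly.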

What do we do next? Is there a follow-up test in the works? People love to hear about that kind of thing (and share their suggestions).

(Optional) Any fun insights based on segmenting the audience? If we learned that the test performed poorly with visitors on tablet, or did best with international visitors, that’s worth a mention … if we’re taking action based on the observation.

That’s a lot! And it’s enough. Here’s some stuff you can leave out:

Results for 10 different secondary metrics. This will quickly confuse your stakeholders, or lead them into analysis paralysis.

Conjecture as to the “why”. The test doesn’t tell you why people converted. Stick to the facts; if somebody disagrees with your explanation, it’ll undermine their overall trust in your results.


Anything missing, or unnecessary here? Hit Reply and let me know what your reports look like 🤓.


© 2022 Brian David Hall