
Open letter to every A/B testing stats engine


Hey Optimizely / Google Optimize / VWO / whoever you are,

Thanks so much for building a stats engine to help us make decisions on experiments.

I love how you implement Bayesian inference to get results faster, or control the false discovery rate, or whatever it is you’re doing inside that black box.

One tiny gripe: what’s up with declaring a winner less than 24 hours into the test?

[Screenshot: A/B test results with a spurious winner declared]

You are causing people to freak out.

If you’d like to provide value where you’re currently sowing fear and chaos, you could temporarily replace “Probability to be best” with “Probability something’s broken.”

If a goal has zero conversions after the test has been live for a few hours, the goal might be broken! Let us know.

If a variation has zero conversions, it might be broken! Let us know.

That’s what we’re looking for immediately after launch, anyway.
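For the curious, here’s a minimal sketch of what that check could look like. Everything in it is hypothetical (the function names, the 2% baseline rate, the 0.1% threshold; no vendor actually exposes this): the idea is just that if a goal or variation has seen enough traffic that zero conversions would be wildly unlikely, the engine should flag it instead of crowning a winner.

```python
# Hypothetical "probability something's broken" check. Assumes each
# visitor converts independently at `baseline_rate`; all names and
# defaults here are illustrative, not any stats engine's real API.

def prob_zero_if_working(visitors: int, baseline_rate: float) -> float:
    """Chance of seeing zero conversions from `visitors` visitors
    if the goal actually fires at `baseline_rate`."""
    return (1.0 - baseline_rate) ** visitors


def probably_broken(visitors: int, conversions: int,
                    baseline_rate: float = 0.02,
                    threshold: float = 0.001) -> bool:
    """Flag a goal or variation as likely broken: zero conversions
    observed, even though that outcome would be rarer than
    `threshold` if tracking were working."""
    if conversions > 0:
        return False
    return prob_zero_if_working(visitors, baseline_rate) < threshold


# 500 visitors, zero conversions, ~2% expected rate:
# (1 - 0.02) ** 500 is about 4e-5, well under the threshold.
print(probably_broken(visitors=500, conversions=0))  # True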

Telling us you’ve already found a winner is just playing with our emotions. We want to trust you. We want to build a future together.

But you’re making it hard. Please, no more lies.

