The worst outcome in A/B testing
… is “results were inconclusive; further testing necessary.”
You built and QA’d the test. Spent the time and money.
Ran and monitored the test. Watched the metrics and kept everyone up to date.
Concluded and analyzed the test. Puzzled over the results.
And nothing has changed.
You don’t know whether this page (or element) is worth testing on, so you have to test more. You don’t know if you tested the wrong kind of change, or if you just need to test a bigger change … or if you made multiple changes that somehow cancelled each other out 🤔
You definitely don’t have a more valuable website than when you started.
Contrast this outcome with “results were inconclusive; no further testing necessary.”
You’re still stuck with the same old conversion rate, but there’s a good chance your next test will change that. Because you know where not to test.
To get the latter outcome:
- Test big changes. Certain audiences are sensitive to subtle changes in certain elements, but unless you know that’s your situation, you’re better off being a bit more aggressive
- Test more variations. A single inconclusive variation is pretty meaningless; six inconclusive variations clearly communicate “Nothing to see here”
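The "test big changes" advice falls straight out of sample-size math: the visitors you need scale with the inverse square of the effect you're trying to detect. Here's a back-of-envelope sketch using the standard normal-approximation formula for two proportions. The numbers (5% baseline conversion, 95% confidence, 80% power) are illustrative assumptions, not from this post:

```python
import math

def visitors_per_variation(p_base, p_new, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per arm to reliably detect a move
    from p_base to p_new (z defaults: 95% confidence, 80% power)."""
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_new - p_base) ** 2)

baseline = 0.05
subtle = visitors_per_variation(baseline, 0.055)  # +10% relative lift
big = visitors_per_variation(baseline, 0.065)     # +30% relative lift
print(subtle, big)  # the subtle change needs roughly 8x the traffic
```

Under these assumptions, detecting the 10% lift takes on the order of 31,000 visitors per variation; the 30% lift needs under 4,000. If your traffic can't feed the former, the subtle test was always going to end "inconclusive."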