It’s possible, and quite common, to do experimentation without optimization. Is that what you’re doing? If so, should you feel bad about it? Let’s find out.
Idea validation experiments
These tests are typically
- Control + 1 variation
- A single change, e.g. adding an element or updating some copy
- Hypothesis driven
You had an idea about what would improve a site (or email or ad campaign, or app onboarding flow), and you’re testing it to find out if you’re right.
If you are right, you get more conversions. You look smart. Your ideas are good, and you feel good.
Unfortunately, this type of testing yields a lot of inconclusive results. This is why you’ll hear that only 10-20% of A/B tests win. No matter how good your ideas are, they’re not all going to be statistically significant winners with a measurable impact on revenue.
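To see why a plausible-looking lift so often fails to reach significance, here's a minimal sketch of a two-proportion z-test on made-up conversion numbers (all figures are hypothetical, chosen only to illustrate the point):

```python
# A sketch of why many single-variation A/B tests come back inconclusive:
# a two-sided, two-proportion z-test on hypothetical conversion counts.
from math import sqrt, erf

def p_value_two_sided(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se            # standardized difference
    # Standard normal tail probability via the error function
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# A "good idea" that lifts conversion from 5.0% to 5.4% on 5,000 visitors per arm:
p = p_value_two_sided(250, 5000, 270, 5000)
print(round(p, 3))  # well above 0.05 -> inconclusive at this sample size
```

An 8% relative lift that would be worth real money still reads as noise at this traffic level, which is why most modest, single-change wins never clear the significance bar.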
Optimization experiments
These tests are typically
- A/B/n tests with several variations
- Prioritized by the page or element’s impact on conversions
- Composed of variations drawn from the range of all feasible alternatives
Where an idea validation test often starts with observing the site UX and asking “What if we did ____?” an optimization test starts with observing which pages and elements are crucial to the conversion funnel, and asking “What’s possible here?”
You’ll brainstorm as many variations as you can. (This will almost always include an option to remove elements entirely.) You’ll make sure your variations are as different from each other as possible.
Your test will take longer to set up and QA, and longer to run. You might not beat last quarter’s record number of tests launched. People will look at you funny because you decided to test Comic Sans.
But when it concludes, you’ll have a variation that beat several alternatives – not just a winner, but the winner.
Or you’ll have inconclusive results. Which sucks – but you’ll be able to say “after testing such a wide variety of experiences and observing no effect, we’re confident that further testing on this element would be pointless.”
Which one are you doing?
You can do one, or the other, or both – depending on the test, on politics, on traffic and conversions. Idea validation isn’t always bad, and optimization isn’t always feasible.
BUT … if you’re mostly running idea validation tests and want to get conclusive results more often, hit REPLY and let’s chat.