Last week we looked at reasons why “number of tests” is a bad metric for the health of your experimentation program. It’s not the only one, of course; there are tons of terrible metrics for your team and for your tests.
Anyone who optimizes anything should beware the Cobra Effect:
The British government was concerned about the number of venomous cobra snakes in Delhi. The government therefore offered a bounty for every dead cobra. Initially this was a successful strategy as large numbers of snakes were killed for the reward. Eventually, however, enterprising people began to breed cobras for the income.
Pick the wrong metric, and you’ll find yourself worse off than when you started, with lower conversion rates, less money, and a pile of dead snakes.
Let’s look at a few less-than-ideal metrics and the possible consequences of optimizing for them.
For your testing program
Number of tests. As mentioned previously, you’ll likely end up launching lower-impact, single-variation tests on pages, elements, and audiences that don’t matter, just to hit your target.
Complexity of tests. This one is frustratingly common, though I’ve seen no evidence that test complexity correlates with value. (I’ve seen evidence to the contrary.) You’ll end up “getting your money’s worth” out of your testing platform, exploiting every possible audience segment and activation feature. You’ll sink more hours into development and QA for a test that reaches fewer visitors.
Speed from test idea to launch. What’s your hurry? Focus on building tests quickly and you’ll skimp on the preliminary research that validates their impact. You’ll ship more bugs to site visitors and end up with less trustworthy analytics. And you’ll have a pile of questionable results to try to make sense of. I’d rather have the pile of snakes.
For your tests
Engagement / time on site / number of pages viewed. These metrics can be a leading indicator that you’re on to something … or a sign that the visitor is confused. At best, you’ll end up explaining to the person who controls your budget that you’ve spent their money on clicks.
Video views. If you change the primary CTA to “Watch Video,” guess what? You’ll get more video plays. Do more video plays mean more conversions? 🤷‍♂️ No idea! See above regarding the awkward conversation that awaits you.
Visits to step X in the funnel. Unless “X” is the step where they make a purchase, this metric is tempting but dangerous. How do you know you’re not just sending unqualified visitors to checkout (or signup) before they’re ready?
For your team and your tests, optimizing for revenue is the holy grail; the further your metrics drift from it, the deadlier the Cobra Effect becomes. Be careful out there.