In your best moments, you’re optimizing furiously, testing a broad range of feasible alternatives to find the highest performing variation. But there’s a time and place for simple “on/off” tests to measure the impact of a single change – especially a big change.
Site redesigns gone wrong
Whether it’s the talented sales team at a design agency, a critical mass of internal team “exposure hours” to your existing site, or a competitor’s sleek new look, site redesigns happen. They probably shouldn’t, but they do.
Examples abound. You won’t read about them in the news, but every year thousands of smaller organizations update their site or app, only to see their numbers tank. If I had a nickel for every time I’ve heard “We did a site refresh a few months ago and conversions have been down” … I’d have a giant pile of sad, sad nickels.
Is it worth running a test on a single lousy variation?
As discussed before, if you’re only testing one variation, you’re not optimizing; you’re validating an idea. Spending all your time and energy validating ideas is ultimately a suboptimal strategy – even if your ideas are great. (I’m sure they are.)
But if you’re being pushed to roll out a significant change to your site, based on something other than the results of a winning experiment, you should absolutely test it first.
This can happen for all kinds of reasons. The new director of UX has an idea about how to give the site an updated look and feel. The CEO wants to add a new widget to the home page. Your competitor is using some hot new functionality that everyone wants to try out.
By all means, try it out. But run it as a test. That’ll let you measure the impact on conversions and revenue, effectively making the new elements or design “pay rent” on your site. And it’ll give you a kill switch in case things go horribly awry.
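In practice, a “kill switch” is usually just a feature flag that gates the new design and can be flipped off without a deploy. Here’s a rough sketch of the idea — the flag name, flag store, and rollout percentage are all hypothetical; a real setup would use a feature-flag service or remote config rather than an in-memory dict:

```python
import hashlib

# Hypothetical in-memory flag store. In production this would live
# in a feature-flag service so the switch works without a deploy.
FLAGS = {"new_homepage": {"enabled": True, "rollout_pct": 50}}

def show_new_design(user_id: str, flag: str = "new_homepage") -> bool:
    """Decide whether this visitor sees the redesign."""
    cfg = FLAGS[flag]
    if not cfg["enabled"]:  # the kill switch: off means everyone gets the old design
        return False
    # Deterministic bucket 0-99 per user, so each visitor gets a
    # consistent experience across sessions.
    digest = hashlib.md5(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < cfg["rollout_pct"]

def kill(flag: str = "new_homepage") -> None:
    """Flip everyone back to the old design instantly."""
    FLAGS[flag]["enabled"] = False
```

Bucketing on a hash of the user ID (rather than a coin flip per page view) keeps each visitor’s experience stable, which matters both for the user and for clean test data.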
Compare the tragic stories of Digg and Marks and Spencer with Instagram’s horizontal-swipe debacle of 2018. Marks and Spencer labored for months to win back customers, and Digg never fully recovered; Instagram rolled back the failed feature in a matter of minutes.
In most of these cases, it’s necessary to do the work of fully implementing the update before you set up the test. Be careful! This sunk cost will bias your team toward keeping the new feature.
To combat this tendency, be sure to agree up front on how you’ll act on the test results. (Just like you would with any other test.)
For the typical “de-risk it” test, this means “if we don’t see a significant decline in conversions, we’ll permanently implement the variation.” But make sure everyone’s in agreement that if you do see a sharp decline, you’ll kill the new design. No matter how sleek.
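One common way to make “significant decline” concrete is a one-sided two-proportion z-test comparing the variation’s conversion rate against control’s. The threshold and the numbers below are purely illustrative — use whatever significance bar your team agreed on up front:

```python
import math

def conversion_drop_z(ctrl_conv, ctrl_n, var_conv, var_n):
    """One-sided two-proportion z-test: how far below control
    is the variation's conversion rate, in standard errors?"""
    p1 = ctrl_conv / ctrl_n
    p2 = var_conv / var_n
    pooled = (ctrl_conv + var_conv) / (ctrl_n + var_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / ctrl_n + 1 / var_n))
    return (p2 - p1) / se  # negative z => variation converting worse

# Illustrative numbers: control converts 5.0%, variation 4.2%
z = conversion_drop_z(ctrl_conv=500, ctrl_n=10000, var_conv=420, var_n=10000)

# Agreed rule, decided before the test: kill the redesign if
# z < -1.645 (a 5% one-sided significance level).
decision = "kill" if z < -1.645 else "keep"
```

The point isn’t the statistics; it’s that “kill if z < -1.645” is written down before anyone has seen the data, so the sunk cost of the redesign can’t quietly move the goalposts.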
Have you ever run any disastrous redesigns? Please hit REPLY and tell me all about it. Maybe your story can save others from heartbreak 😁