The not knowing
August 2019
Earlier this week we made some aggressive changes to a B2B lead generation site. Results for engagement and form page visits were encouraging.
But then the doubts crept in.
What if our gimmicky new CTA was just tricking people into clicking? What if forcing people onto a form page was just confusing them? What if we increased engagement but failed to increase form submissions - or decreased them?
With only 100 form submissions per month, we don't have the numbers to get statistical significance. Yesterday I lamented:
If these changes have driven conversions down to 75 a month, that is terrible - and we won't know it. If they raised conversions to 120 a month, that is HUGE - and we won't know it.
Notice all the despair and hopelessness wrapped up in the phrase "won't know." We're data people, who use data to do data-centric stuff. It sucks not to know.
Except … we never really know. Even if we measured an increase in form fills at statistical significance, all we would know is "there's only about a 1 in 20 chance that this result is total bullshit, so that's cool."
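To put some numbers on the "not knowing": here's a quick back-of-the-envelope two-proportion test. The 5,000 visits per month denominator is a figure I'm inventing for illustration; only the 100-vs-120 conversion counts come from the scenario above.

```python
# Rough two-proportion z-test: could we even detect a jump from 100 to 120
# form submissions? The 5,000 visits/month denominator is a made-up figure.
from statsmodels.stats.proportion import proportions_ztest

visits = 5_000                 # hypothetical monthly traffic, before and after
conversions = [100, 120]       # baseline month vs. the hoped-for "HUGE" month
nobs = [visits, visits]

z_stat, p_value = proportions_ztest(conversions, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# p lands around 0.17 here - nowhere near the usual 0.05 cutoff, so even a
# genuine 20% lift in conversions would look like noise at this volume.
```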
To make decisions based on our imperfect data, and to free ourselves of doubt, we need a bit of faith. And a process.
I'll propose the following guidelines for deciding whether a microconversion win (like increased engagement or form page visits) is the real deal:
- The conversion rate for the next step also goes up (though maybe not at statistical significance)
- The increase in the microconversion rate is not wildly greater than the increase in the next step conversion rate
The second criterion helps us look with suspicion on tricks like a CTA that says "Click Here for $1,000 Cash" - if we see an 85% increase in clicks, but a 0.17% increase in the next step … something's fishy.
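Here's a rough sketch of how you might encode both gut checks. The function name, the example rates, and the 5x ratio threshold are all mine; the guidelines above don't prescribe specific values.

```python
# Two gut checks for a microconversion "win". All numbers and the 5x ratio
# threshold are hypothetical - the guidelines above don't prescribe values.
def looks_legit(micro_before, micro_after, next_before, next_after, max_ratio=5.0):
    micro_lift = (micro_after - micro_before) / micro_before
    next_lift = (next_after - next_before) / next_before

    # Guideline 1: the next step's conversion rate should move up too.
    if next_lift <= 0:
        return False
    # Guideline 2: the microconversion lift shouldn't wildly outrun the next step.
    return micro_lift / next_lift <= max_ratio

# The "$1,000 Cash" trap: an 85% jump in clicks, a 0.17% jump in submissions.
print(looks_legit(micro_before=0.10, micro_after=0.185,
                  next_before=0.020, next_after=0.020034))   # False
```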
The first criterion is probably a crime against statistics, but I sleep fine at night having committed it. Tomorrow we'll look closer at the risks that come with such reckless statistical methods, so you can make your peace and sleep at night, too.