
The not knowing


Earlier this week we made some aggressive changes to a B2B lead generation site. Results for engagement and form page visits were encouraging.

But then the doubts crept in.

What if our gimmicky new CTA was just tricking people into clicking? What if forcing people onto a form page was just confusing them? What if we increased engagement but failed to increase form submissions - or decreased them?

With only 100 form submissions per month, we don’t have the numbers to get statistical significance. Yesterday I lamented:

If these changes have driven conversions down to 75 a month, that is terrible – and we won’t know it. If they raised conversions to 120 a month, that is HUGE – and we won’t know it.
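To put numbers on that lament, here's a minimal two-proportion check. The monthly traffic figure is invented (only the submission counts are real), but at volumes anywhere in this range, even the jump from 100 to 120 lands nowhere near significance:

```python
# Hypothetical check: does 100 -> 120 form submissions a month reach
# significance? The 10,000 visitors/month figure is an assumption.
from statsmodels.stats.proportion import proportions_ztest

visitors = 10_000
baseline = 100   # ~1% conversion rate
lifted = 120     # the "HUGE" 20% win

z, p = proportions_ztest(
    count=[lifted, baseline],
    nobs=[visitors, visitors],
)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z = 1.4, p = 0.18 - not close to 0.05
```

A p-value around 0.18 is the kind of result we'd see by chance almost one time in five, even if the change did nothing at all.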

Notice all the despair and hopelessness wrapped up in the phrase “won’t know.” We’re data people, who use data to do data-centric stuff. It sucks not to know.

Except … we never really know. Even if we measured an increase in form fills at statistical significance, all we would know is “there’s only about a 1 in 20 chance that this result is total bullshit, so that’s cool.”
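That "1 in 20" is just the false-positive rate baked into a 0.05 threshold. A quick simulation (again with invented traffic numbers) shows it in action: run a pile of A/A tests where nothing actually changed, and roughly 5% of them still come back "significant."

```python
# A/A simulation with made-up traffic: no real difference exists, yet a
# 0.05 threshold still declares a "winner" about 1 time in 20.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(seed=0)
visitors, rate, trials = 10_000, 0.01, 2_000

false_positives = 0
for _ in range(trials):
    a = rng.binomial(visitors, rate)  # control conversions
    b = rng.binomial(visitors, rate)  # "variant" conversions, same true rate
    _, p = proportions_ztest([a, b], [visitors, visitors])
    false_positives += p < 0.05

print(f"'significant' A/A tests: {false_positives / trials:.1%}")  # ~5%
```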

To make decisions based on our imperfect data, and to free ourselves of doubt, we need a bit of faith. And a process.

I’ll propose the following guidelines for deciding whether a microconversion win (like increased engagement or form page visits) is the real deal:

1. The lift in the microconversion itself is statistically significant.
2. The lift carries through to the next step in the funnel, in at least rough proportion.

The second criterion helps us look with suspicion on tricks like a CTA that says “Click Here for $1,000 Cash” - if we see an 85% increase in clicks but only a 0.17% increase in the next step … something’s fishy.
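Here’s a rough sketch of that sanity check, with invented funnel counts (nothing below is from the actual experiment): compute the lift at each step and flag any step whose lift doesn’t remotely keep up with the one before it.

```python
# Invented control/variant counts for a three-step funnel.
funnel = {
    "cta_clicks":      {"control": 400, "variant": 740},  # +85%
    "form_page_views": {"control": 380, "variant": 395},  # nearly flat
    "form_submits":    {"control": 100, "variant": 98},
}

prev_lift = None
for step, counts in funnel.items():
    lift = counts["variant"] / counts["control"] - 1
    # The 1/5 ratio is arbitrary; the point is to catch a gross mismatch
    # between an upstream lift and the step that follows it.
    fishy = prev_lift is not None and prev_lift > 0.10 and lift < prev_lift / 5
    print(f"{step:16s} lift = {lift:+6.1%}" + ("  <- fishy" if fishy else ""))
    prev_lift = lift
```

Clicks exploded, but the form page didn’t feel it - exactly the pattern a gimmicky CTA produces.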

The first criterion is probably a crime against statistics, but I sleep fine at night having committed it. Tomorrow we’ll take a closer look at the risks that come with such reckless statistical methods, so you can make your peace and sleep at night, too.

