Evidence-backed hypothesis fail
September 2019
Here’s a True Story I heard the other night at a CRO nerdout 🤓
A skilled and conscientious growth team did an extensive round of user testing on their signup funnel.
They found the site scored low on trust. Questions like “Why do you need this info?” and “What are you going to do with my data?” went unanswered, and user testing made the gap obvious.
So they developed an experiment to test their now-research-backed hypothesis that clarifying the privacy policy would reassure visitors and increase the signup conversion rate.
Here’s what happened: 📉
The new, improved, trustworthier experience saw far more visitors nope out of the signup funnel.
The team’s best guess as to why this happened sounds reasonable to me:
Visitors who actually complete the funnel do not have data usage policies on their minds. They’re just completing a task. Calling attention to trust and data privacy merely slows them down and, in some cases, causes them to rethink what they’re doing.
So what’s it all mean? Lots of stuff:
- User testing results should always be validated in a live experiment; test participants are a small sample, and they’re not your users (see the sketch after this list)
- Generic CRO frameworks that urge you to highlight trust, urgency, etc. are just that: generic
- Using research-backed hypotheses doesn’t guarantee better results
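On that first point: when an experiment shows a drop like this, the basic validation step is checking the difference isn’t just noise. A two-proportion z-test is the standard sanity check. Here’s a minimal sketch in Python; the visitor and signup counts are hypothetical, since the story doesn’t report actual numbers:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_*: signups completed; n_*: visitors entering the funnel.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical counts for illustration only
p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=4000,   # control
                                       conv_b=390, n_b=4000)   # "trustworthier" variant
print(f"control {p_a:.1%} vs variant {p_b:.1%}, z={z:.2f}, p={p:.4f}")
```

With these made-up numbers the drop from 12% to 9.75% comes out significant (p ≈ 0.001), which is exactly the kind of result that should make you trust the experiment over the user-testing session.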
Hit reply if you’ve got a good story about experiment results contradicting preliminary research. I’d love to know how pervasive this problem is; together we can warn the world.