A
A/A Test
An experiment with no differences between variations, used to validate that your testing platform and metrics are set up correctly, and to measure noise in conversion data.
A/B Test
An experiment that shows different experiences to different visitors in order to optimize for a particular conversion, or to de-risk going live with a new experience, or possibly just to learn stuff.
Above the Fold
Content that is visible on page load, without scrolling, to most visitors.
Analysis Paralysis
The tragic state of affairs when you’ve run an experiment, gathered data, and still can’t make a decision based on it.
B
Bounce Rate
The percentage of visitors who leave your site without navigating past the page they landed on.
C
Call To Action
A step we encourage visitors to take, or a button or link representing that step.
Conversion
A valuable action taken by a website visitor – usually purchases, signups, form submissions. You can say “conversion” when referring to clicks, engagements, and other interactions, but optimizing for them doesn’t necessarily make the business more money.
Conversion Rate
A fraction or percentage representing conversions per session (or conversions per visitor).
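As a quick illustration (the traffic numbers below are made up):

```python
# Hypothetical traffic numbers, purely for illustration.
sessions = 12_500
conversions = 375

cvr = conversions / sessions
print(f"Conversion rate: {cvr:.2%}")  # 3.00%
```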
Conversion Rate Optimization
The practice of systematically changing your website with the hope/belief that what you’re doing will increase conversions.
CVR
Abbreviation for Conversion Rate.
CTA
Abbreviation for Call To Action.
E
Engagement
Literally any interaction on your website. The opposite of bounce. Feels good, but does not make you money 🙂
F
Funnel
A possibly useful mental model that pictures visitors to your site taking a series of sequential steps before converting. An ecommerce checkout flow is a good example of a funnel. Your site may not really have one.
H
Heat Maps
Visual representations of where visitors click on a page.
Hero Section
The topmost section of a web page, excluding navigation and promotional banners. It’s a terrible place for a carousel.
Hypothesis
At best, a rigorous statement of experiment parameters and success criteria. At worst, pseudoscientific gibberish for saying “I wanna test this”. You may not need one.
I
Impact on Revenue
A combination of math and hope that seeks to communicate how much extra money a business will make as a result of an experiment.
Inconclusive
The result of a test that fails to find any clear winners or losers. The worst result imaginable, but if you’ve reached it after testing sufficiently different experiences, it tells you that the page / element / type of change you’re testing just doesn’t matter.
L
Lift
The percent increase achieved by an A/B test variation. If the control experience has a 3% conversion rate, and a variation has a 3.6% conversion rate, it has 20% lift.
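Using the numbers from this entry, relative lift is just the difference divided by the control rate:

```python
control_cvr = 0.03     # control conversion rate from the example above
variation_cvr = 0.036  # variation conversion rate

lift = (variation_cvr - control_cvr) / control_cvr
print(f"Lift: {lift:.0%}")  # 20%
```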
M
MDE
Abbreviation for Minimum Detectable Effect.
Microconversion
An action on your website that is not tied directly to revenue – newsletter signups, page visits, clicks.
Minimum Detectable Effect
A measure of test sensitivity. Based on traffic, conversions, desired statistical significance, and test duration, this number tells you the smallest lift you’ll reliably be able to detect with statistical methods.
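A rough sketch of the idea, using the common normal-approximation formula for a two-proportion test (the numbers below are hypothetical, and your testing platform’s calculator may apply different corrections):

```python
from math import sqrt
from statistics import NormalDist

def relative_mde(baseline_cvr, visitors_per_arm, alpha=0.05, power=0.8):
    # Normal-approximation formula; real calculators may differ slightly.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95% significance
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    se = sqrt(2 * baseline_cvr * (1 - baseline_cvr) / visitors_per_arm)
    return (z_alpha + z_power) * se / baseline_cvr

# With a 3% baseline and 10,000 visitors per arm, only lifts of
# roughly 20%+ are reliably detectable.
print(f"{relative_mde(0.03, 10_000):.1%}")
```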
N
Noise
A simple, non-statistician’s term to refer to the fact that traffic, conversion rates, and pretty much everything else about website data tends to vary day to day, hour to hour. Noise is what causes identical variations in A/A tests to show different conversion rates.
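A tiny simulation of an A/A test (two identical arms, made-up numbers) shows noise in action:

```python
import random

random.seed(7)
true_cvr = 0.03  # hypothetical "true" conversion rate for both arms
rates = []
for arm in ("A1", "A2"):  # two identical experiences
    conversions = sum(random.random() < true_cvr for _ in range(5_000))
    rates.append(conversions / 5_000)
    print(arm, f"{rates[-1]:.2%}")
# The two measured rates differ slightly even though the arms are identical.
```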
P
Painted Door Test
An experiment that measures interest in a feature not yet built, or resource not yet created. Visitors are invited to take the first step toward using the nonexistent resource, and their clicks are used to validate or invalidate demand.
The Peeking Problem
The practice of looking at A/B test results before the test has completed, potentially calling a winner due to noise.
Personalization
Serving different experiences to different visitors based on their behavior, demographics, or some other arbitrary set of rules.
S
Session Recordings
Video playback of actual site visitors’ clicks, scrolls, and navigation.
Statistical Significance
Loosely, the odds that a given test’s results are not due to random noise (more precisely, one minus the p-value threshold you test against). 95% seems to be a magic number.
Stopping Conditions
Rules that dictate when an experiment should be concluded, so as to avoid the peeking problem. Based on target sample size, minimum test duration, and level of statistical significance.
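The idea can be sketched as a simple predicate (the thresholds below are hypothetical, not recommendations):

```python
def should_evaluate(visitors, days_running,
                    target_sample=20_000, min_days=14):
    # Only look at results once BOTH conditions are met, so that
    # early noise can't tempt you into calling a winner (peeking).
    return visitors >= target_sample and days_running >= min_days

print(should_evaluate(25_000, days_running=5))   # False: big sample, too early
print(should_evaluate(25_000, days_running=21))  # True: safe to evaluate
```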
U
User Testing
Qualitative research method that involves assigning actual humans (who hopefully fall within your target customer base) to complete tasks on your website, while sharing their thought process.
V
Variation
A unique experience in an A/B test.
Visitor
An actual human using your website, or a fictional human reconstructed by your analytics platform based on click and page view data.