How to carry out an A/B test
Once you’ve decided which part of your marketing approach to split-test (social media, email, etc.), it’s time to actually carry out the test.
Decide which metric you want to test
If you are in B2C, you might want to boost sales, average order value, or conversion rate, or, if you offer a subscription-based service, the number of returning customers.
B2B companies might focus more on the number of leads generated, because once other companies show interest in your solution, purchases and subscriptions will follow, provided the lead is nurtured correctly. It’s unlikely that a company representative browsing your website will hit “Choose a plan” straight away.
Brainstorm ideas and formulate your hypothesis
Once you’ve chosen the metric you want to test, the next step is deciding exactly what to change. Will you alter a heading, subject line, design element, or CTA?
Review where your visitors or leads drop off, then brainstorm why that might be happening and which change could turn things around. Once you have a list of ideas, pick the one with the best potential and turn it into a specific prediction, for example: “Moving the CTA one block higher will increase the click rate by 10%.” That statement is both your hypothesis and your definition of success.
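It helps to pin the numbers down before the test starts, so there is no debate later about whether the result counts as a win. Here is a minimal sketch of that step; the 2% baseline click rate is a hypothetical figure for illustration, not a benchmark:

```python
# Turning the example hypothesis into concrete numbers.
# baseline_ctr is a hypothetical value; use your own current click rate.
baseline_ctr = 0.020                      # current click rate of the control
expected_lift = 0.10                      # the hypothesized +10% relative gain
target_ctr = baseline_ctr * (1 + expected_lift)

print(f"Hypothesis passes if click rate reaches {target_ctr:.2%}")  # 2.20%
```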
Implement your hypothesis and create two variations
Technically, it’s just one new variation. The other one is already in place: the underperforming version you want to improve. The existing version is called “the control,” and the one you expect to outperform it is called “the challenger.” You are going to pit them against each other and see what happens.
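If you manage variations in code or a config file, the setup can be as small as two named versions of the same element. A hypothetical sketch (the field names and values here are invented for illustration):

```python
# Hypothetical example: the control keeps the CTA where it is today,
# while the challenger moves it one block higher, per the hypothesis above.
variants = {
    "control":    {"cta_text": "Choose a plan", "cta_block": 4},
    "challenger": {"cta_text": "Choose a plan", "cta_block": 3},
}
```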
Run the A/B test
Roll out your “challenger” version and track how it performs against the “control.” There are two things to keep in mind here: audience size and test duration.
You have to run the test on large enough audience segments to get statistically significant results and to reduce the risk of a false positive or a false negative. On a small sample, random fluctuations can look like a real effect, only to vanish when you test on a larger audience. A simple approach is to split your audience in half: show one half the control variation and the other half the challenger variation.
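How large is “large enough” depends on your baseline rate and the lift you hope to detect. Below is a sketch of both ideas: a sample-size estimate using the standard normal-approximation formula for comparing two proportions, and a deterministic 50/50 split. The 2% baseline and 10% lift are the hypothetical numbers from the earlier example:

```python
import hashlib
import math
from statistics import NormalDist


def sample_size_per_variant(p_control, p_challenger, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates (two-sided normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_challenger * (1 - p_challenger)
    effect = abs(p_challenger - p_control)
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)


def assign_variant(user_id: str) -> str:
    """Split the audience 50/50 by hashing the user ID, so each visitor
    always lands in the same group on every visit."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "control" if bucket == 0 else "challenger"


# Hypothetical numbers: 2% baseline click rate, hoping for a 10% relative lift.
needed = sample_size_per_variant(0.020, 0.022)
print(f"Visitors needed per variant: {needed:,}")   # roughly 80,000 each
print(assign_variant("user-42"))                    # "control" or "challenger"
```

Note how sensitive the result is: the smaller the expected lift, the more visitors you need, which is why small segments so often produce inconclusive tests.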
The test also has to run long enough. A couple of days creates the same problem as a small sample, only the limit is the time frame rather than the audience, and the risk of an inconclusive result remains. Aim for at least two weeks when running a split test, and account for external factors (such as holidays) that can skew results during that window.
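You can estimate the duration from your traffic and the sample-size figure above. A quick sketch, assuming a hypothetical 6,000 visitors per day entering the test:

```python
import math

daily_visitors = 6_000          # hypothetical traffic entering the test
needed_per_variant = 80_680     # from the sample-size sketch above
total_needed = needed_per_variant * 2

days_for_sample = math.ceil(total_needed / daily_visitors)
duration = max(days_for_sample, 14)     # never shorter than two weeks
print(f"Plan to run the test for at least {duration} days")
```

In this example the sample size, not the two-week floor, is the binding constraint. With heavier traffic the floor takes over, and you would still keep the test open for the full two weeks so that it covers at least one complete weekly cycle.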