A/B testing (also known as split testing) pits two versions of something against each other to find out which performs better. In email marketing, it generally means testing two versions of an email to see which is better at getting the desired result.
How A/B Testing Works
Email A/B testing is simple: you create two versions of an email that differ in just one respect. You then choose at random a small percentage of your total email list to act as test subjects, split that test pool in half, and send one version to each half. Once the results show which version is more successful, you send that version out to your full subscriber list.
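The random split described above can be sketched in a few lines of Python. This is a minimal illustration only; the subscriber addresses and the 20% test fraction are hypothetical, and in practice your email platform will usually handle the split for you:

```python
import random

def split_test_pool(subscribers, test_fraction=0.2, seed=42):
    """Randomly pick a test pool and split it into two equal groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    pool_size = int(len(subscribers) * test_fraction)
    pool = rng.sample(subscribers, pool_size)  # random sample, no duplicates
    half = pool_size // 2
    return pool[:half], pool[half:]  # group A gets version A, group B gets version B

# Hypothetical list of 1,000 subscribers: 20% test pool = 200 people, 100 per group
subscribers = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_test_pool(subscribers, test_fraction=0.2)
```

Using `random.sample` (rather than, say, taking the first 20% of the list) matters: subscribers who signed up earliest may behave differently from recent ones, and a non-random split would bias the test.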
Or, if you prefer, you can use your entire subscriber list as the testing pool. This option produces more accurate results, but it also takes more time to gather usable data on your subscriber base's preferences.
For instance, you craft two different subject lines, then send half your recipients a version with subject line A and the other half the version with subject line B. Whichever version generates the better result, whether that's email opens, clicks, or purchases, is the winner.
Over time you can use the information you generate in A/B testing to figure out what kinds of content your subscribers like, and what kinds of emails they respond to.
The Key to Successful A/B Testing
A/B testing is easy to learn, but hard to master. There are lots of variables that you can test, and a number of different goals and results to aim for. So, while the principle behind A/B testing is simple, it’s easy for the testing to get complicated if you’re not methodical about what you’re doing.
The most important rule in A/B testing is to test only one thing at a time. Don't try to test subject lines, email content, and incentives all in a single test; you'll only muddy the waters and make it hard to determine which factor is driving the responses you get. A/B testing is so called because you're testing two versions of a single variable, and that simplicity is the key to its success.
What Kinds of Parameters Can You Test?
When it comes to email A/B testing there’s a fairly extensive list of possible test variables. Some examples include:
- Email subject lines: does a generic subject line work, or is it more important to be specific about what’s in the email?
- Personalised content: do people respond better if you use their name in the subject line or salutation? (Remember—test only one at a time!)
- Email content: here you can test a range of different elements, from small details such as section titles or paragraph length, to images, calls to action, and the kind of language you use.
- Formatting: columns versus full-width paragraphs, the location of the images you use, fonts and use of white space, colour use—all of these things can influence people’s reactions to your emails, and are worth testing.
- Incentives: are your subscribers interested in information, or direct benefits such as discounts or freebies?
Decide Your Test Parameters
Once you have your first test versions ready to go, your next step is to select test email recipients, and decide how you’ll determine who “wins” the test.
If you are using a test group, the group should be a small percentage of your total recipient list. Depending on your total number of subscribers, that could be anywhere from 5% to 30%; bear in mind that the group must be large enough to give you a statistically valid result.
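One standard way to gauge "large enough" is the sample-size formula for comparing two proportions. The sketch below assumes a baseline open rate and the smallest improvement you'd care about detecting; both example numbers are hypothetical, and the defaults correspond to the conventional 95% confidence and 80% power:

```python
import math

def sample_size_per_group(baseline_rate, minimum_lift,
                          z_alpha=1.96, z_beta=0.84):
    """Rough per-group sample size for a two-proportion test.

    z_alpha=1.96 -> 95% confidence; z_beta=0.84 -> 80% power.
    """
    p1 = baseline_rate
    p2 = baseline_rate + minimum_lift
    p_bar = (p1 + p2) / 2  # pooled rate under the null hypothesis
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. to reliably detect a lift from a 20% open rate to 25%,
# each group needs roughly a thousand recipients
n = sample_size_per_group(baseline_rate=0.20, minimum_lift=0.05)
```

The takeaway is that small lifts need large groups: halving the lift you want to detect roughly quadruples the required sample, which is why tiny lists struggle to produce statistically valid A/B results.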
Note that while you’ll usually get more accurate results by using your entire subscriber list for testing, there are instances where it’s better to use a small test group before sending out an email.
For instance, if you’re trying out something very different from your normal emails—and there’s a chance the response will be negative—then testing is a good idea. Or, if you’re testing copy for a special offer or sale where you hope to get a big response, it’s worth pre-testing to make sure you send the best possible version out to your subscriber list.
For any given test, be clear about what you're testing and how you'll decide which version wins. The definition of "winner" usually depends on the variable under test; generally, you'll compare metrics such as open rate, click-through rate, and conversion rate.
For instance, if you’re testing two versions of a subject line, the winner might be the email version that more people open. If you’re testing calls to action, it might be click-through rates you’ll look at to determine the winner.
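Whichever metric you choose, the comparison itself is a two-proportion test: did version A's rate genuinely beat version B's, or is the gap just noise? A minimal sketch, with the open counts below purely hypothetical:

```python
import math

def two_proportion_z(successes_a, sent_a, successes_b, sent_b):
    """Z-test for whether two open (or click) rates genuinely differ."""
    p_a, p_b = successes_a / sent_a, successes_b / sent_b
    p_pool = (successes_a + successes_b) / (sent_a + sent_b)  # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # two-sided p-value via the normal CDF (expressed with math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 230/1000 opens for A vs 180/1000 for B
z, p = two_proportion_z(successes_a=230, sent_a=1000,
                        successes_b=180, sent_b=1000)
if p < 0.05:
    winner = "A" if z > 0 else "B"
else:
    winner = "no clear winner"
```

If the p-value is above your threshold (0.05 is conventional), treat the test as inconclusive rather than crowning a winner; sending the "better" version on the strength of noise defeats the purpose of testing.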
The conversion rate of people who click through to your website also matters in some cases. It is an important metric in its own right, of course, but when you're A/B testing email content it is another useful value to look at, because it can help you determine whether your email content meshes well with your website content.
Consistency between email content and website content is important because your website should fulfil whatever expectations your email has set up. If your email promises a special offer or other incentive, and the reader clicks through to a web page that doesn’t mention the offer, you may find that an increased click-through rate doesn’t translate into more sales.
Ongoing A/B Testing is Important for Success
A/B testing is a trial-and-error process through which you learn over time what your subscribers like and don't like, and what kinds of email content work best to achieve your goals. So the most important aspect of testing is to keep it going. Don't just do it once or twice and then stop. A/B testing works best if you consider it an ongoing and integral part of your email strategy, where you're continually refining and improving your emails to make them as successful as possible.