What is A/B Testing? How It Helps Sales Outreach
Find out what A/B testing is and how you can use it to improve your outreach
A/B testing is a simple but powerful tactic for improving the quality and performance of your sales emails. A/B tests remove the guesswork from your approach because you drive sales decisions with real data rather than hypotheses.
In sales, small changes can have a huge impact. Minor tweaks to the subject line, call to action, or even the timing of your messages could be the difference between zero and hero.
Here’s a closer look at what A/B testing is and how you can use it for sales outreach.
What is A/B testing?
At its most basic, A/B testing is a way to compare two versions of something to figure out which yields better results.
The version we know today was first used in the 1920s by the statistician Ronald Fisher, who applied it in agricultural experiments to test different fertilizers. Although he wasn’t the first to run A/B tests, he was the one who worked out the basic principles and gave the process a name.
In fact, James Lind’s 18th-century experiments to find a treatment for scurvy are often considered a precursor of Fisher’s.
A/B testing in its current form emerged in the 1990s. While the math behind A/B testing remained largely the same, what changed was the sheer scale, complexity, and tools used.
Google famously ran a huge A/B test to determine which shade of blue to use for ad links. They tried 41 different shades (just because it’s called “A/B” doesn’t mean you’re limited to two options) and went with the shade that earned the most clicks.
As they say, the rest is history.
Now, A/B testing is used for nearly everything in the digital world: emails, product features, website design, and more. You can even run A/B tests to evaluate GPT variants or gain insight into fine-tuning smaller LLMs.
Example of A/B testing
A/B testing is a simple process, especially if you stick to no more than 2 or 3 variations. Depending on what you’re testing, complexity grows as you introduce more versions. Google’s “41 shades of blue” test wouldn’t have been too challenging since it simply measured clicks versus no clicks. It’d be another story if they were testing hero images with varying amounts of text, imagery, and interactive elements.
Here’s an example of an A/B test in SDR outreach:
| Version A | Version B |
| --- | --- |
| Hey Sarah! Wanted to know if you’re open to chat about sales strategies? Metal reached 1.8x more revenue in 30 days with us by running 100% autonomous sales campaigns. Book a call here: [link]. Thanks –YZ | Hey Sarah! Wanted to know if you’re open to chat about sales strategies? Metal reached 1.8x more revenue in 30 days with us by running 100% autonomous sales campaigns. Should we set up some time to chat? Thanks –YZ |
A/B tests yield more reliable results when they change as few elements as possible, ideally just one. By testing 1-2 elements at a time, you can clearly identify what is (and isn’t) working.
In this case, version A and version B are nearly identical, with a single difference: the call to action. A uses an imperative call to action with a link, while B asks a question and omits the link.
The version that sees more responses and calls booked would be the winner.
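For instance, here’s a minimal Python sketch of how you might tally the winner. The send and reply counts are invented purely for illustration:

```python
# Hypothetical results for the two CTA variants (all numbers invented for illustration)
results = {
    "A (link CTA)": {"sent": 300, "replies": 9},
    "B (question CTA)": {"sent": 300, "replies": 15},
}

for version, stats in results.items():
    reply_rate = stats["replies"] / stats["sent"]
    print(f"Version {version}: {reply_rate:.1%} reply rate")

# With these made-up numbers, version B (5.0%) edges out version A (3.0%)
```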
Why is A/B testing important for sales outreach?
A/B testing helps you understand what resonates with your audience. Instead of guessing which emails will get the best responses, A/B tests allow you to collect data-driven insights so you can create more effective emails in the future.
Consistent A/B testing also lets you optimize messaging over time. By regularly testing elements like subject lines, calls to action, and delivery times, you can fine-tune your approach and refine your sales copy.
Although running tests requires an upfront investment of time and resources, it pays off in the long run. After all, the goal of A/B tests is to learn what your audience likes, which means that if you’ve designed them well, you should end up with a lower cost per lead.
What can you A/B test in sales outreach?
The beauty of A/B testing is that you can run experiments on almost anything, so long as there are at least two versions. In the case of sales outreach, there are a lot of things you can A/B test if you want to improve your campaign results:
- Subject lines
- Email length
- Wording of your value proposition
- Type of content
- Call to action
- Day of the week
- Time of day
- Multimedia elements
AiSDR makes it easy to set up campaigns to run multiple A/B tests. A while ago, our CEO used AiSDR to run his own A/B test to find out the optimal length of subject lines.
You can even configure AiSDR to include other types of media in emails such as AI videos and AI memes to see which form of content your audience connects with best.
How to set up an A/B test for sales outreach
Setting up an A/B test is a straightforward process.
Decide what you want to test
You’ll need to answer two questions here: 1) What are you testing? 2) What are you changing?
Let’s imagine you decide to test your sales emails. This answers the first question.
Next, you need to decide which element of your sales emails you want to test. This could be the subject line, content, language style (e.g. formal versus informal), call to action, email length, or the time or day you send the email.
Just choose one variable to focus on at a time so you get clearer results.
Split your audience into equal groups
Ideally, you should divide the possible readers of your email into 2-3 groups of roughly equal size. Nothing’s stopping you from having more than 3 (Google had 41!), but you need an audience large enough to produce statistically meaningful results.
AiSDR helps in this regard by supplying you with the contact data of 700M+ leads from Apollo and ZoomInfo.
Then start sending. Group A receives email A, Group B receives email B, and (if necessary) Group C receives email C.
Remember, aside from the element you’re testing, you should keep all other factors the same. This means if you’re testing email length, you need to keep the day, time, subject line, and CTA identical. Or if you’re experimenting with days, then you shouldn’t change the email’s copy or the time you send it.
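Here’s a minimal sketch of the split step, assuming your leads are already in a plain list (the lead emails and group count below are placeholders, not anything AiSDR-specific):

```python
import random

# Placeholder lead list; in practice this would come from your CRM or lead database
leads = [f"lead_{i}@example.com" for i in range(600)]

def split_into_groups(leads, num_groups=2, seed=42):
    """Shuffle the leads and deal them into roughly equal groups."""
    shuffled = leads[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed keeps the split reproducible
    return [shuffled[i::num_groups] for i in range(num_groups)]

group_a, group_b = split_into_groups(leads, num_groups=2)
print(len(group_a), len(group_b))  # 300 and 300
```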
Track the results of each version
You’ll need to monitor the performance metrics of every version of the email.
Common metrics you should watch are:
- Open rate
- Reply rate
- Positive reply rate
- Click-through rate
- Demo booked rate
Once you’ve gathered enough data, you can use the results to refine your future emails (and repeat the process so that you’re continuously improving).
If one version significantly outperforms the others, that’s a strong signal about what your audience likes, and future outreach should mirror it. If both versions generate similar results, you can use both for spintaxing emails while you run another A/B test to validate the results or try another approach.
Just keep in mind that not every metric applies to every element. Subject lines are a good example. It’s easy to use open rates to measure their value, but demos booked and click-through rates aren’t accurate indicators of a subject line’s quality.
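If you want more than an eyeball check on whether one version “significantly outperforms” another, a standard two-proportion z-test is a reasonable yardstick. This is a generic statistics sketch, not a feature of any particular tool, and the reply counts are placeholders:

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the z statistic and two-sided p-value for comparing two rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Placeholder counts: replies out of emails sent for each version
z, p = two_proportion_z_test(successes_a=9, n_a=300, successes_b=15, n_b=300)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 would suggest a real difference
```

With reply counts this small, the test will usually tell you to keep collecting data before declaring a winner.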
Common pitfalls to avoid in A/B testing for sales
A/B testing only pays off if you steer clear of the pitfalls that come with it. Here are a few of the most common traps:
- Testing too many variables – When you change more than one or two elements, you won’t have a clear idea about which factor actually influenced the outcome.
- Small sample size – Tests on small groups are helpful for ideation and initial direction, but you need a larger sample to draw conclusions you can rely on long term.
- External factors – Try as you might, it’s hard to control for all factors. Changes in market conditions, seasonality, and even a person’s daily mood will influence your outreach results.
- Too many metrics – If you’re looking at too many metrics, you’re at risk of “spurious correlation” and misattributing performance. Good A/B tests use just a few metrics.
- Stopping tests too early – Many people demand near-instant results, which can lead to incorrect decisions based on insufficient data.
Best practices for A/B testing in sales
Here are some best practices you should follow if you plan on running A/B tests in sales outreach:
- Always test one variable at a time – This can’t be overstated: the best A/B tests focus on a single change (at most two). This lets you pinpoint the reason behind swings in results.
- Run your test with a sufficient sample size – Larger audiences increase the validity of your results while reducing the impact of random fluctuations (see the sample-size sketch after this list).
- Give your tests enough time to run – Analyzing results too early can lead to misleading conclusions. Most tests need a week to capture a full picture of performance. If you’re A/B testing follow-ups and follow-up cadences, you might need 2-3 weeks.
- Always document your findings – There’s no point in running A/B tests if you’re not tracking what you test, their outcomes, and any insights.
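To put a rough number on “sufficient sample size,” here’s a back-of-the-envelope sketch using the standard two-proportion approximation at 95% confidence and 80% power. The baseline and target reply rates are assumptions you’d swap for your own:

```python
from math import ceil

def sample_size_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate leads needed per group to detect a lift from rate p1 to rate p2
    at 95% confidence (z_alpha) with 80% power (z_beta)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Assumed rates: a 3% baseline reply rate, and you want to detect a jump to 5%
print(sample_size_per_group(0.03, 0.05))  # roughly 1,500 leads per version
```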