
Get Exceptionally Valuable Results From A/B Testing With Advanced Measurement

By Jordan Con
May 25, 2016

A/B testing has become a staple in the data-driven marketer’s diet. After all, decisions based on A/B experiments are objective and data-centric, rather than based on feelings, which can vary from day to day and person to person.

Here at Bizible, we love running A/B experiments, from testing landing page copy to button colors to blog design and more. In this post, we’ll discuss how we use A/B tests and measure them using down-funnel metrics to make sure that we’re optimizing for the right goals.

A/B Testing With Leads And Opportunities

In standard A/B testing, experiment results are usually decided by metrics like engagement (clicks) or clicks on a specific item such as a form. For example, marketers can test two different featured images on a landing page and measure whether people are more likely to engage anywhere on the page (click on anything) based on the different images. Or more specifically, marketers can test two buttons that say either “Download Now” or “Download Today” and measure the specific button clicks. Whichever version gets more clicks is determined to be the winner. (Read more about A/B testing basics here.)

But when marketers optimize for these metrics, are they really optimizing for the right outcome?

According to the pipeline marketing strategy, marketers should be optimizing for the entire funnel, not just the top. Rather than simply maximizing clicks, marketers should consider how to generate the most valuable clicks, as measured by qualified leads, by how those leads convert into opportunities, and eventually by new customers and revenue.

So instead of evaluating an A/B test based on how many people click on version A versus version B, you want to know who is clicking on each version, as well as how efficiently those people convert down the funnel. Without this added knowledge, it is impossible to determine whether the 60 people who clicked on A are actually worth more than the 40 people who clicked on B.
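
To make that concrete, here’s a rough back-of-the-envelope sketch in Python. The click counts come from the example above; the down-funnel conversion rates are hypothetical numbers we made up purely for illustration.

```python
# A rough sketch of why raw click counts can mislead.
# The click counts come from the example above; the down-funnel
# conversion rates are hypothetical, purely for illustration.

clicks_a, clicks_b = 60, 40

# Assumed conversion rates for the people who clicked each version:
# clicks -> qualified leads, then qualified leads -> opportunities
lead_rate_a, opp_rate_a = 0.20, 0.25
lead_rate_b, opp_rate_b = 0.45, 0.30

opps_a = clicks_a * lead_rate_a * opp_rate_a  # 60 * 0.20 * 0.25 = 3.0
opps_b = clicks_b * lead_rate_b * opp_rate_b  # 40 * 0.45 * 0.30 = 5.4

print(f"Version A: {clicks_a} clicks -> {opps_a:.1f} expected opportunities")
print(f"Version B: {clicks_b} clicks -> {opps_b:.1f} expected opportunities")
```

In this made-up scenario, the version with fewer clicks produces more expected opportunities, which is exactly the kind of insight raw click counts hide.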

Marketers can better evaluate the success of A/B tests, and therefore make better decisions, by using down-funnel metrics, like qualified leads and opportunities, as the goal rather than clicks.


Reporting On A/B Test Results With Leads

We’ll start by explaining what our reporting looks like with our A/B testing solution. (We use Optimizely for A/B tests. It’s an awesome tool with great functionality.) We create the experiment and then set the goal. Within the functionality that the A/B testing solution provides, the deepest-funnel goal available is clicks on a specific item. For landing page optimization experiments, we usually use clicks on a form submission button.

Form submissions may seem like a pretty good metric to use, but they are actually quite misleading if you think that they’re the same as (or even close to) leads.

For example, in Q1 2016, the data for The Definitive Guide to Pipeline Marketing ebook looked like this:

Form submissions: 7.5x qualified leads
Contacts created: 5x qualified leads

Because qualified leads are a small percentage of total form submissions, we can’t say with confidence that generating form submissions is an accurate representation of how we generate qualified leads.

To report on our A/B test results with qualified leads, we turn to Bizible’s attribution integration with Optimizely.

When we run our experiment through Optimizely, the data is automatically integrated with our attribution solution. Therefore, the experiment data can be connected to the marketing metrics that we know and love: leads, opportunities, customers, and revenue.

Using an A/B Test Lead Report, we can see how the experiments and experiment variations are contributing to the leads. And because this data is in our attribution solution, our Salesforce lead qualification filters are applied so that the report shows the data that we care about.

We also have to be sure to use the appropriate time frame. By matching the lead create time frame with the experiment time frame, we can omit pre-existing leads that happened to resubmit forms during the experiment.
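
If your attribution data can be exported, this filter is simple to express. Here’s a minimal sketch using pandas, with hypothetical column names (lead_id, created_date, experiment_variation) and made-up dates; the idea is just to keep leads whose create date falls inside the experiment window.

```python
import pandas as pd

# A minimal sketch of the time-frame filter. The column names and dates
# are hypothetical; swap in whatever your attribution export actually uses.
leads = pd.DataFrame({
    "lead_id": [101, 102, 103],
    "created_date": pd.to_datetime(["2016-01-04", "2016-02-20", "2015-11-30"]),
    "experiment_variation": ["Version A", "Version B", "Version B"],
})

experiment_start = pd.Timestamp("2016-01-01")
experiment_end = pd.Timestamp("2016-03-31")

# Keep only leads *created* during the experiment window. Lead 103 existed
# before the test started, so its form resubmission is excluded.
new_leads = leads[leads["created_date"].between(experiment_start, experiment_end)]

print(new_leads.groupby("experiment_variation")["lead_id"].count())
```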

The data in this report shows how many new leads engaged with each experiment (total), as well as with each variation. Because all of the information is pulled straight from the A/B testing software, it’s important to use a descriptive naming convention that you’ll recognize when you are analyzing the data in your attribution solution.

Here’s an example of the data for one of our tests (both using our A/B testing software and with our attribution solution):

A/B Testing Software Report:

Version A (Original): 70.54% engagement rate
Version B: 72.47% engagement rate (+1.3%)

Attribution Solution Report:

Version A (Original): 91 leads
Version B: 131 leads

If we had only looked at the engagement data that comes with our A/B testing software, it would have looked like the experiment didn’t make a difference. Both versions had approximately the same engagement rate (clicks per pageview).

However, after looking at our attribution data, we see that Version B had a much more positive impact on the metrics that we care about. Even though approximately the same number of people engaged with each version, Version B’s engagement was a lot more meaningful, and we were able to declare Version B the winner.
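
If you want a quick sanity check that a lead gap like 91 vs. 131 isn’t just noise, a simple chi-square test works, assuming the A/B testing tool split traffic roughly 50/50 between the versions (confirm the actual split in your tool before leaning on this). Here’s a sketch using SciPy.

```python
from scipy.stats import chisquare

# Lead counts from the report above. This assumes the testing tool split
# traffic roughly 50/50 between versions -- check the actual split first.
leads_a, leads_b = 91, 131

result = chisquare([leads_a, leads_b])  # expected counts default to an even split
print(f"chi-square = {result.statistic:.2f}, p-value = {result.pvalue:.4f}")
# A small p-value (about 0.007 here) suggests the lead gap isn't just traffic noise.
```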


Reporting On A/B Test Results With Opportunities

Better yet is reporting on A/B test results with sales opportunities. Since opportunities are even closer to revenue in the funnel, they’re a better indicator of how the A/B test impacts what we really care about.

The downside to this, and the reason we often use qualified leads as our experiment goal, is that it takes a certain amount of volume to reach statistical significance in your test results. If you generate a lot of sales opportunities, it’s best to use them as the test goal. For smaller organizations, or organizations with fewer but larger deals, it can often take months for the test to produce actionable results based on opportunities.
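
To get a feel for how much volume that takes, you can run a standard sample-size estimate. The sketch below uses statsmodels with made-up numbers: a hypothetical 1.0% baseline visitor-to-opportunity rate and a hoped-for lift to 1.3%.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Hypothetical numbers, purely for illustration.
baseline_opp_rate = 0.010  # 1.0% of visitors become opportunities today
target_opp_rate = 0.013    # the lift we hope the winning version delivers

effect_size = proportion_effectsize(target_opp_rate, baseline_opp_rate)
visitors_per_variation = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.8
)

print(f"Roughly {visitors_per_variation:,.0f} visitors needed per variation")
```

With opportunity rates that low, the required traffic per variation quickly runs into the thousands, which is why a test judged on opportunities can take months at lower volumes.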

Reporting with opportunities follows much the same process as reporting with leads. Through your attribution solution, you can filter for the opportunities generated by visitors who saw each version of your test. When one version is performing better (with statistical significance) at generating opportunities, you can declare that version the winner.

Additionally, potential revenue is attached to each opportunity. While potential revenue is clearly not the same as actual revenue, it’s a good indicator of deal size. Because you now have deal size data, you can see how your A/B test influences what types of accounts are in your pipeline. Perhaps Version A generates a high volume of small accounts, but Version B generates a medium volume of larger accounts. This adds another element that you must consider when evaluating the winner of an A/B experiment.
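
One simple way to weigh that trade-off is to compare total potential pipeline, not just opportunity counts. The numbers below are entirely hypothetical.

```python
# Hypothetical opportunity counts and average potential revenue per version,
# made up to illustrate the volume-versus-deal-size trade-off.
versions = {
    "Version A": {"opps": 40, "avg_potential_revenue": 8_000},   # many small accounts
    "Version B": {"opps": 25, "avg_potential_revenue": 18_000},  # fewer, larger accounts
}

for name, v in versions.items():
    pipeline = v["opps"] * v["avg_potential_revenue"]
    print(f"{name}: {v['opps']} opportunities x ${v['avg_potential_revenue']:,} "
          f"average = ${pipeline:,} potential pipeline")
```

In this made-up case, the lower-volume version actually puts more potential revenue in the pipeline, so fewer opportunities doesn’t automatically make a variation the loser.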


Conclusion

As you can see, adding the attribution component to A/B testing illuminates many additional insights about prospect behavior and allows marketers to evaluate the test results with more meaningful metrics.

Because A/B testing requires sufficient volume, qualified leads are a great measure of success. They’re deeper in the funnel than clicks and form submissions, but for most companies, there will be sufficient volume to get actionable results in a reasonable time frame. If you have high volume or some patience, however, sales opportunities are an even better goal.

For us, because Bizible integrates seamlessly with Optimizely, the powerful analysis is easy to do. We are confident that our analysis, and the changes that come from it, are making our marketing more efficient and creating better business outcomes, which is something all marketers should be striving for. To learn more about how attribution can help your marketing drive more business value, check out the ebook below: B2B Marketing Attribution 101, an intro guide to attribution for revenue-driven B2B marketers.

Topics: attribution
