For B2B marketers who control, or at least have input into, some part of their website, continuous improvement should always be a goal. To do that, you have to be running experiments or tests -- optimizations large and small that improve the user experience and guide visitors down your intended prospect path.
Sometimes you hit a wall and run out of ideas. How much can you really tweak, right? So here are six tests we either have run or are currently running that you should try, too, on your B2B website or blog:
Number of Offers per Blog Post
Simplified Landing Pages
Dynamic Content Offers
CTA Copy on High Volume, High Impact Pages
Message Personalization by Segment
Video Content
We will also discuss how to measure your experiments so that you optimize for the outcome of marketing (and make your boss happy), not just top of the funnel clicks (that make you feel good).
Number of Offers per Blog Post
For B2B marketers who are focused on converting visitors and generating demand, it can be tempting to put multiple offers on every page. Giving people logical paths to go down is effective, no doubt. If someone is interested enough to click on and read an article about account-based marketing engagement metrics, it makes sense that they might be interested in downloading an ebook on account-based marketing strategy. Or, if a visitor navigates to the product page, they might be interested in watching a demo video of how the product works.
Wherever a person is on our website, we want to continue offering relevant content. For one, we hope it’s valuable and helpful for the reader. And two, it increases engagement with our company.
This is the upside of offers.
On the other end of the spectrum, offers can get out of control. As consumers of content ourselves, we all know how annoying cluttered websites can be. But how many offers counts as “out of control” varies from page to page -- and that’s exactly the experiment you should be running on your website.
For most of our blog posts, we have a default sidebar that has three different offers. One offer is to subscribe to the blog -- the relevancy of that is pretty obvious. The next is to request a demo -- at best, it’s a quick conversion, and at worst, it’s a marginally distracting banner ad. The final offer is a download form for a 101-level ebook. The great thing about 101 content is that it’s extremely versatile. When it’s next to top-of-the-funnel content, it’s a logical next step. When it’s next to more advanced content, it’s an easy out for people who find themselves a bit in over their heads (“If you don’t quite understand everything that’s going on here, check this out instead”).
These three offers are our blog default.
But they’re not always optimal. For some content, we’d prefer the reader to have as few distractions as possible. We want to optimize for the best reading experience, which means text on a white page. So we’re testing removing one or two of the offers to clean up the reading experience. So far, we’ve found that the clean version often outperforms the default layout, increasing conversion rates on some blog posts by 33% even though there are fewer ways to convert.
It doesn’t always work, but it’s worth a test.
Simplified Landing Pages
Along the same lines, experiment with simplifying your landing pages. Other than your desired action, how many ways can people navigate away from your page?
One experiment that has helped us improve conversions was to remove the navigation menu from the top of our page. Now, the header just has our logo, which is linked to our homepage. By removing the navigation menu, we’ve essentially limited visitors’ actions to three things: 1) filling out the intended form (desired action), 2) bouncing - either closing the tab/window or typing in a new URL, and 3) clicking on our logo and going to the homepage.
That’s a lot fewer decisions than our standard website layout, where they can potentially click on dozens of links.
Removing the navigation menu worked for us, but it’s something you should experiment with and monitor closely. And it’s not just us: another company reported that removing the navigation menu improved their conversion rates by 100%.
Dynamic Content Offers
We consider ourselves a pretty content-driven marketing team. We spend a lot of time creating content, as well as building ways to put the right content in front of the right people at the right time. A big portion of that happens through dynamic content offers -- serving different ebooks or research reports to different visitors and experimenting with which works best.
We actually do this two different ways. The first is a simple A/B test where we create two or more variations of an offer (usually different ebooks) and then randomly serve them to visitors. When one proves more effective than the others in a statistically significant way, we make the winning offer the permanent one.
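To make “statistically significant” concrete, here is a minimal sketch of a two-proportion z-test in plain Python. The offer names and numbers are hypothetical, and most A/B testing platforms run this math for you -- this is just to show what the check looks like:

```python
# A minimal sketch of checking statistical significance for an A/B offer
# test using a two-proportion z-test. All figures are hypothetical.
from math import sqrt, erf

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical results: Ebook A converts 40/1000 visitors, Ebook B 62/1000
p = z_test_two_proportions(40, 1000, 62, 1000)
print(f"p-value: {p:.4f}")   # below 0.05 here, so B's lift is significant
```

If the p-value stays above your threshold (0.05 is conventional), keep the test running rather than declaring a winner early.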
The second way we dynamically serve content offers is rule-based. If the visitor fits in category A, serve them Offer 1. If the visitor fits in category B, serve them Offer 2. The categories could be lead stage, historical actions, first-time visitor, etc. In theory, this should improve performance, but it’s not always so clear in practice. That’s why it’s worth running an experiment.
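The rule-based logic above can be sketched as a simple first-match-wins function. The visitor attributes and offer names below are hypothetical; in practice, these rules would live in your personalization or marketing automation tool:

```python
# A minimal sketch of rule-based offer serving, first matching rule wins.
# Visitor fields and offer names are hypothetical examples.

def pick_offer(visitor: dict) -> str:
    """Map visitor attributes to a content offer."""
    if visitor.get("lead_stage") == "opportunity":
        return "case-study"                  # late stage: proof, not education
    if visitor.get("downloaded_101_ebook"):
        return "advanced-strategy-ebook"     # already past the basics
    if visitor.get("first_time_visitor"):
        return "101-ebook"                   # easiest entry point
    return "blog-subscribe"                  # default offer for everyone else

print(pick_offer({"first_time_visitor": True}))   # -> 101-ebook
print(pick_offer({"lead_stage": "opportunity"}))  # -> case-study
```

The experiment, then, is comparing this rule-based serving against the simple randomized version to see whether the added targeting actually lifts down-funnel results.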
CTA Copy on High Volume, High Impact Pages
It’s a fairly common CRO (conversion rate optimization) best practice to test CTA copy, whether it’s “Read More,” “Learn More,” “Download,” “Download Now,” “Download Today,” etc.
Yes, you absolutely should be testing for the best CTA copy, but you should also run those tests on pages that get a lot of volume and have a lot of potential. Try this test on the landing page for your best ebook, or on one of your most trafficked non-homepage pages (if the homepage is off limits).
The reason: most B2B companies measure traffic in the thousands of unique visitors per month (as opposed to the millions), so it can take a really long time for a test to reach a conclusive result. This is particularly true for something as small as a CTA copy change. Make sure you test on a page that gets a lot of views and where the CTA is the highest priority.
We often use www.bizible.com/pipeline-marketing as our testing ground. It’s a high volume page and everything leads to the form CTA at the bottom of the page. Therefore, we can be sure that the test won’t take too long and that people who visit the page see the test.
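To see why page volume matters so much, here is a back-of-the-envelope sketch using the common 16·p(1−p)/d² sample-size rule of thumb (roughly 80% power at 5% significance). The baseline rate, detectable lift, and traffic figures are hypothetical:

```python
# A rough sketch of estimating how long a CTA copy test will take,
# using the n ~= 16 * p * (1 - p) / d^2 rule of thumb per variant
# (~80% power, 5% significance). All inputs are hypothetical.

def visitors_needed(baseline_rate: float, min_lift: float) -> int:
    """Visitors needed per variant to detect an absolute lift of min_lift."""
    p, d = baseline_rate, min_lift
    return round(16 * p * (1 - p) / d**2)

def weeks_to_run(baseline_rate, min_lift, weekly_visitors, variants=2):
    """Approximate test duration in weeks for the whole experiment."""
    total = visitors_needed(baseline_rate, min_lift) * variants
    return total / weekly_visitors

# 3% baseline conversion, detecting a 1-point lift, 2,000 visits per week
print(round(weeks_to_run(0.03, 0.01, 2000), 1))   # roughly 4.7 weeks
```

Drop the traffic to a few hundred visits per week and the same test stretches into months -- which is exactly why low-volume pages are poor testing grounds for small copy changes.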
Message Personalization by Segment
Personalization on your website increases its effectiveness because it allows you to deliver more relevant messaging. For example, if you know a visitor works at a company in the high tech industry, you want to use messaging that’s tailored for high tech companies. That means using the right terminology, showing the right customer logos, etc.
However, personalization can be expensive, so you want to make sure that the benefits are worth the cost.
In its simplest form, B2B marketers can begin to experiment with message personalization by offering versions of essentially the same page, altered by a specific segment. For example, in our navigation, we have a “Why Bizible” page that has a dropdown menu for different verticals. Each page answers the question of why B2B marketers need Bizible marketing attribution, but each page also has specific information and data for their industry.
If the initial test is successful, the next step would be to use a personalization tool to automatically figure out which industry the visitor is a part of and then serve them the relevant page.
Video Content
Video content has the potential to be much more engaging than written content. Depending on the type of video, it offers the ability to give a face to the brand or demonstrate how a product works via screen recording and voice over.
And while better equipment can certainly help, good video content doesn’t require tons of it. Most smartphones now record high definition video and clear audio. Alternatively, we’ve also worked with agencies to create videos. It’s a good way to produce high quality video content without much in-house manpower or video expertise.
Here are three ways we are experimenting with video content on our website.
Measuring the Experiments
So now you have six more experiments to run. Creating and running these experiments is one thing, but they’re only valuable if you’re able to actually determine which version is better. When there are dozens of possible metrics to determine success, which one do you use?
First, you need to decide which metric -- or small set of metrics -- really matters. What’s the intent of your website or blog?
For most B2B companies, the goal is to generate demand -- encourage those who are good fits for your product to move down the funnel. Therefore, the success metric shouldn’t be clicks or even simple form fills -- these often have little to no correlation with downstream revenue. Instead, you should be measuring the number of qualified leads, opportunities, and revenue driven by each version of the experiment. If Version A generates 100 clicks and $10,000 in revenue and Version B generates 50 clicks, but $20,000 in revenue, you would definitely want to move forward with Version B (assuming statistical significance). Version B has proven to be more valuable for your business.
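The comparison above can be sketched in a few lines of Python. The figures mirror the hypothetical Version A/B example, with traffic numbers added for illustration:

```python
# A toy sketch of picking an experiment winner by revenue per visitor
# rather than clicks. All figures are hypothetical.

def revenue_per_visitor(visitors: int, revenue: float) -> float:
    return revenue / visitors

variants = {
    "A": {"visitors": 5000, "clicks": 100, "revenue": 10_000},
    "B": {"visitors": 5000, "clicks": 50,  "revenue": 20_000},
}

winner = max(
    variants,
    key=lambda v: revenue_per_visitor(variants[v]["visitors"],
                                      variants[v]["revenue"]),
)
print(winner)   # "B" wins on revenue despite driving half the clicks
```

Ranking by clicks would have picked A; ranking by revenue picks B. That one-line change in the `key` function is the whole difference between optimizing for vanity metrics and optimizing for the business.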
Alternatively, if you’re running an account-based marketing program, you’ll want to measure touchpoint engagement from your target accounts instead of lead volume. Are your test versions proving to be more relevant to the accounts and contacts that you care most about? Even if you’re doing ABM, you’ll also want to measure opportunities and revenue.
Most A/B testing platforms will measure experiments based on clicks. Some will allow you to dictate which specific element of the page you want to track -- clicks on the form submission button, for example. And recently, some have even moved into lead-tracking territory through conversion pixels and integrations with marketing automation platforms.
But to judge experiments based on down-funnel metrics, including revenue, B2B marketers need to use marketing attribution. When measuring A/B tests, attribution bridges the data gap from experiment platform or marketing automation to the CRM, which connects these top and middle funnel initiatives with down-funnel opportunity and revenue data.
Bizible marketing attribution integrates directly with experiment platforms, such as Optimizely, as well as marketing automation platforms. This allows marketers to see how their experiments are performing based on downstream metrics like opportunities, pipeline, and revenue.
To make smart decisions, you need smart measurement and reporting. All the tests in the world won’t make a difference if you’re evaluating them based on the wrong metrics. Good luck and happy experimenting!