Rebecca Strally

B2B Marketing: 6 essentials for testing your teleprospecting

December 2nd, 2013

Originally published on B2B LeadBlog

For years, marketers have been testing messages in emails, on websites and in pay-per-click ads to determine which ones drive the most sales. At MECLABS, we’ve made this a science and have even patented a Conversion Heuristic to analyze the process.

A few months ago, we started applying this heuristic to a channel that is more than a century old – the telephone. MECLABS has its own lead generation group working with clients to help them drive more revenue through teleprospecting.

Last summer, we began applying what we learned from online testing to that channel, and recently, Brian Carroll wrote about how using science increased teleprospecting sales handoffs by 304%.

When I asked Craig Kasel, Program Manager, MECLABS, for a few insights into testing teleprospecting, he explained that testing can help deliver the right messaging to prospects.

“It’s a good idea to test your lead process to make sure you’re getting the appropriate messaging to the correct people,” Craig explained.

I spoke with Craig about some of the teleprospecting testing projects he’s been a part of at MECLABS to discover how B2B marketers could apply this science to their own teleprospecting efforts. Here’s our best advice from what we’ve learned so far.

Engage your call center

No testing will work if your callers aren’t completely on board with the idea.

To build the buy-in that produces accurate test results:

  • Involve them right away. They’ll know better than anyone else what messages (or treatments) are worth testing.

In fact, you may find some of your callers are already engaged in some form of informal testing.

  • Make sure they understand why they’re doing it, and why their role is so important. If they appreciate their purpose and are involved in creating the test, they’ll be more engaged and excited to help.

Build a simple structure

Determine the problem you’re trying to solve, the question that will help solve that problem, and the results that will help you answer the question.

We do this by developing a research plan, which has:

  • A primary research question – Which statement will help us reach a decision-maker faster?
  • A primary metric – Number of decision-makers reached.
  • A secondary metric – Number of sales handoffs.
  • A problem statement – Contacts were hesitant to provide the name of the decision-maker.
  • A test hypothesis – We will find out which statement best encourages the contacts to give us decision-maker information.

Determine which approach to testing works best for your organization

  • Sequential tests – Callers test a single message for a set period of time, and then test another message for the same length of time.

Craig recommended sequential testing if you are going to have the same callers executing both tests.

“This is also the type of test we typically run because it’s easiest,” Craig explained. “If one of our lead generation specialists discovers a new approach they think works better, we let them try it and then measure the results.”

  • A/B split tests – Measure multiple messages simultaneously. This is better for larger call centers where you have the manpower for separate people to test separate messages in the same time frame.
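
If you opt for an A/B split test, the way you divide the list matters as much as the messages themselves. Here is a minimal Python sketch – purely illustrative, not a MECLABS tool – of one way to randomly assign a fresh contact list to two treatments, which helps guard against the selection effect discussed later in this article:

    import random

    def split_contacts(contacts, seed=42):
        """Randomly assign a fresh contact list to two treatment groups.

        Random assignment (rather than splitting by region, alphabet or
        list age) helps keep one treatment from getting a systematically
        'better' list than the other."""
        shuffled = contacts[:]                 # copy so the original list is untouched
        random.Random(seed).shuffle(shuffled)  # fixed seed makes the split repeatable
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    # Hypothetical usage: 500 fresh contacts split evenly between two callers
    contacts = ["Contact %d" % i for i in range(1, 501)]
    group_a, group_b = split_contacts(contacts)
    print(len(group_a), len(group_b))          # prints 250 250

Each caller then works only their assigned half of the list, so both treatments draw from the same pool of contacts during the same time frame.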

Test one variable at a time

This way, when you see the result of the test, you will know precisely which variable – the general element you intend to test – influenced it. When you test multiple variables at once, you can’t isolate what caused the results.

Here are three example test scripts based on the research plan above. The portion of each script after the greeting is the element of the message we tested.

Control

Hello, ___, my name is Jane and I am calling with The Widget Company.

We are currently the third-largest widget company in the nation offering competitive prices and solutions to make your job easier. When we last spoke, you told me that you use a consulting service to select widget support. Could I have your consultant’s information so the next time they choose widget support, we can be included in their evaluation?

Treatment A

Hello, ___, my name is Jane and I am calling with The Widget Company.

When we last spoke, you told me that you hired a consultant to select widget support. I wanted to let you know that we have a widget sale and I wanted to speak with your consultant to see if our sale on widgets would be a good fit for you. How can I reach them?

Treatment B

Hello, ___, my name is Jane and I am calling with The Widget Company.

When we last spoke, you told me that you work with a consultant to select widget support. Since we do not nationally advertise and may not have had the opportunity to work with your consultant, we would like to share our information with them. I would like to get your consultant’s contact information in order to be in consideration when they next do their evaluations for you.

Validity starts with confidence

Level of confidence is a statistical term meaning that you’ve reached a certain pre-established level of probability in a test. We want to minimize the chance that the difference in the metrics of interest between the treatments is due to random chance.

For example, a test that reaches a 95% level of confidence has only a 5% probability that the observed difference is due to random chance.
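
To make that concrete, here is a minimal Python sketch – an illustration, not the MECLABS methodology – of a standard two-proportion z-test you could use to check whether the difference between two treatments’ success rates clears a 95% level of confidence. The call counts and handoff numbers below are hypothetical:

    import math

    def two_proportion_z_test(successes_a, calls_a, successes_b, calls_b):
        """Return the z-score and two-tailed p-value for the difference
        between two success rates (e.g., sales handoffs per call)."""
        rate_a = successes_a / calls_a
        rate_b = successes_b / calls_b
        # Pooled rate under the assumption that both treatments perform the same
        pooled = (successes_a + successes_b) / (calls_a + calls_b)
        std_error = math.sqrt(pooled * (1 - pooled) * (1 / calls_a + 1 / calls_b))
        z = (rate_a - rate_b) / std_error
        # Convert |z| to a two-tailed p-value using the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical results: 14 handoffs from 400 control calls vs. 28 from 400 treatment calls
    z, p = two_proportion_z_test(14, 400, 28, 400)
    print("z = %.2f, p = %.3f" % (z, p))  # a p-value below 0.05 meets a 95% level of confidence

In this hypothetical example, the p-value comes out around 0.03, so the difference would meet a 95% level of confidence; with a smaller sample, the same rates might not.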

Here are some examples of validity threats that can negatively affect a test’s level of confidence.

  • Sample distortion effects – This happens when your sample of calls is too small to reach a 95% level of confidence in your testing.

A sufficient sample size depends on your existing success rate. For instance, if you’re measuring the number of sales leads, and your typical success rate is two leads for every 100 calls, then making 500 calls will give a better estimate of your true lead rate than only making 200 calls.

The lower your existing success rate is, the more people you will have to call to achieve a valid test.

It is also possible to work with smaller sample sizes, but the caveat is your tolerance for risk when making business decisions based on a lower level of confidence. (A rough way to estimate how many calls you need appears at the end of this section.)

  • List pollution effect – You can’t run a new test or treatment on the same list. The list has to be fresh for each test. For example, if you need 500 contacts to achieve validity, you can’t call a list of 250 people twice.
  • History effect – This happens when tests are too drawn out, so influences outside the treatment are more likely to skew results. With A/B split testing, you avoid this since both treatments run simultaneously. Try to compress the time span of your testing. We prefer one to two weeks.
  • Selection effect – This happens when test subjects aren’t distributed evenly. For instance, one treatment is tested on a list that’s never been called before and another treatment is tested on a list that is months old.
  • Channel selection effect – In teleprospecting, your channel isn’t a pay-per-click advertisement or website; it’s the person who is making the call. Channel consistency is critical to ensuring test validity.

On a website, you can completely control the presentation of value. That’s impossible to do with phone calls. However, you can make them more consistent by:

  • Providing a detailed script for callers to follow.
  • Training them on how to use the script.
  • Recording all calls and listening to at least 50% of them to make sure tone and inflection are similar from call to call.
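
If you want a rough sense of how many calls a valid test requires before you start dialing, here is a minimal Python sketch of a standard sample-size (power) calculation for comparing two success rates. This is a generic statistics formula, not a MECLABS tool, and the 4% target rate is a hypothetical assumption:

    import math

    def calls_needed_per_treatment(baseline_rate, expected_rate,
                                   z_alpha=1.96, z_beta=0.84):
        """Estimate the calls needed per treatment to detect a lift from
        baseline_rate to expected_rate at a 95% level of confidence
        (z_alpha = 1.96) with 80% statistical power (z_beta = 0.84)."""
        avg_rate = (baseline_rate + expected_rate) / 2
        numerator = (z_alpha * math.sqrt(2 * avg_rate * (1 - avg_rate))
                     + z_beta * math.sqrt(baseline_rate * (1 - baseline_rate)
                                          + expected_rate * (1 - expected_rate))) ** 2
        return math.ceil(numerator / (baseline_rate - expected_rate) ** 2)

    # The article's example rate: two leads for every 100 calls (2%).
    # To reliably detect an improvement to a hypothetical 4%, each treatment needs roughly:
    print(calls_needed_per_treatment(0.02, 0.04))  # on the order of 1,100 calls

The arithmetic illustrates the point above: the lower your existing success rate, and the smaller the lift you want to detect, the more calls you need before you can trust the result.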

Consider every test a winner

Even if a test results in fewer conversions, you still haven’t wasted time or money. You’re just one step closer to understanding what works. In fact, sometimes we learn more from a losing test than a winning one.

Photo attribution: Cropbot

Related Resources:

Lead Generation: How using science increased teleprospecting sales handoffs 304%

Lead Gen: A proposed replacement for BANT

Landing Page Optimization online course

Customer Connection: Does your entire marketing process connect to your customers’ motivations?

Landing Page Optimization: Addressing customer anxiety

Landing Page Optimization: Test ideas for B2B lead capture page


About Rebecca Strally

Rebecca Strally, Senior Optimization Manager, MECLABS, helps plan tests and design experiments for Research Partners. Rebecca holds a B.A. in history from Mercer University, where she worked and volunteered at several museums. She is currently working on an MBA at the University of North Florida part time. In her free time, Rebecca enjoys reading comics and cheering at roller derby bouts.

Categories: Lead Generation


