
Email Marketing: 3 reasons I was sucked into a Pandora win-back campaign

December 3rd, 2013

It is rare that I become sucked into an email campaign.

After a year of experience at MECLABS, I pride myself on being immune to catchy subject lines, blasted discount sends and tricky calls-to-action.

It’s actually become a bit of a game for me to watch the emails flood in and try to decide which companies are testing a new strategy, which ones are attempting to re-engage me, and which ones are just looking for a click.

In the last month, however, I have been completely taken in by a new personalized email campaign launched by Pandora.

The Internet radio giant has started what appears to be a new engagement campaign designed to pull in customers who have stopped responding to the normal subject lines and creative material.

Let me set the scene: It’s 15 minutes before my lunch hour on a Thursday afternoon and I’m staring listlessly at my Outlook inbox hoping no new projects come in before I manage to flee the office and get some food.

 

Hello sweet, sweet nostalgia

Suddenly, there’s a ding and a new email materializes at the top of the queue. It reads, “It All Started with Rilo Kiley Radio.” I’m hooked before I even have a chance to fully comprehend who the sender is.

Not only does the slightly vague expression pique my interest, but I’m also pulled in by the very specific mention of one of my favorite bands and the romantic nostalgia of the phrase “It All Started With … ” 

 

The subject line captured my attention, but the real beauty of this campaign is the personalized emotional plea found after the open.

 

A large graphic offers me a look back at my musical journey, and Pandora’s marketing team discusses not only specific songs I liked, but also the exact number of songs (4,140 and counting) it has enjoyed playing for me over the last eight years of our relationship.

The first history email was followed by two more, all personalized and designed to make an emotional appeal to my love of certain bands. Here are the other two emails Pandora sent shortly after helping me look back at our musical journey together.

 

Pandora send #2

Subject Line: “The Lumineers Radio Misses You”

 

Pandora send #3

Subject Line: “You Will Love These”

 

Full disclosure: I have opened and read them all.

So, what’s so different about this campaign? Why did I get pulled into Pandora’s win-back effort when I have successfully resisted so many others?

I think I have narrowed it down to three main reasons.

 

Emotional appeal

In a sea of discounts, special deals and savings, the subject lines used by Pandora stand out because they manage to make a strong emotional appeal in a concise way.

This is a lesson worth taking back to the whiteboard for test design. In business, it is easy to become a servant to the bottom line and begin to think this is what appeals to your customers as well.

Deals are nice and I have never passed up a BOGO, but at the end of the day, I might be more willing to spend additional cash on something I’ve fallen in love with rather than something I need.

For example, in college, I once spent too much on a new designer purse and ate Ramen noodle cups for two weeks to make up the cash, because as I saw it:

The purse = Love

Food = Need

Rambling aside, if you can get the customer to fall in love at first glance, you might not need that coupon to get the sale.

 

Personalization

“What new thing can we personalize?” seems to be a big question in the marketing world these days, but Pandora approached personalization thinking instead, “How should we personalize?”

It went beyond a standard personalized salutation by showing me my listening history. While this level of personalization has the potential to be creepy, Pandora’s execution was not.

I would hypothesize this is because Pandora managed to take all of the data it has on my preferences and turn it into a story.

Customers don’t want to be a random collection of meaningless data. No one wants to be reduced to a series of numbers or a set of coded recommendations.

If you’re going to personalize an email campaign, attempt to really show the customer you know who they are.

There’s a popular expression at MECLABS that goes, “people don’t buy from websites, people buy from people.” My twist on that when it comes to win-back campaigns: data sets don’t buy from people; people buy from people, and they deserve to be seen as such.

Read more…

B2B Marketing: 6 essentials for testing your teleprospecting

December 2nd, 2013

Originally published on B2B LeadBlog

For years, marketers have been testing messages on emails, websites and pay-per-click ads to determine which ones drive the most sales. At MECLABS, we’ve made this a science and have even patented a Conversion Heuristic to analyze the process.
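
For readers who haven’t seen it, that patented heuristic is commonly written as:

    C = 4m + 3v + 2(i - f) - 2a

Here, C is the probability of conversion, m is the user’s motivation, v is the force of the value proposition, i is the incentive to take action, f is the friction in the process and a is the anxiety about entering information.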

A few months ago, we started applying this heuristic to a channel that is more than a century old – the telephone. MECLABS has its own lead generation group working with clients to help them drive more revenue through teleprospecting.

Last summer, we began applying what we learned from online testing to that channel and recently, Brian Carroll wrote about how using science increased teleprospecting sales handoffs 304%.

When I asked Craig Kasel, Program Manager, MECLABS, for a few insights into testing teleprospecting, he explained that testing can help deliver the right messaging to prospects.

“It’s a good idea to test your lead process to make sure you’re getting the appropriate messaging to the correct people,” Craig explained.

I spoke with Craig about some of the teleprospecting testing projects he’s been a part of at MECLABS to discover how B2B marketers could apply this science to their own efforts. Here’s our best advice from what we’ve learned so far.

Engage your call center

No testing will work if your callers aren’t completely on board with the idea.

To build the buy-in that produces accurate test results:

  • Involve them right away. They’ll know better than anyone else what messages (or treatments) are worth testing.

In fact, you may find some of your callers are already engaged in some form of informal testing.

  • Make sure they understand why they’re doing it, and why their role is so important. If they appreciate their purpose and are involved in creating the test, they’ll be more engaged and excited to help.

Build a simple structure

Determine the problem you’re trying to solve, the question that will help solve that problem, and the results that will help you answer the question.

We do this by developing a research plan, which has:

  • A primary research question – Which statement will help us reach a decision-maker faster?
  • A primary metric – Number of decision-makers reached.
  • A secondary metric – Number of sales handoffs.
  • A problem statement – Contacts were hesitant to provide the name of the decision-maker.
  • A test hypothesis – We will find out which statement best encourages contacts to give us decision-maker information.
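
To make that structure concrete, here is a minimal sketch of how a research plan like this might be captured in code (Python, with illustrative field names; this is not part of any MECLABS tooling):

    from dataclasses import dataclass

    @dataclass
    class ResearchPlan:
        """Minimal container for the elements of a teleprospecting test plan."""
        research_question: str  # the decision this test should inform
        primary_metric: str     # the one number that answers the question
        secondary_metric: str   # a supporting measure to watch
        problem_statement: str  # the observed problem motivating the test
        hypothesis: str         # what we expect the test to reveal

    plan = ResearchPlan(
        research_question="Which statement will help us reach a decision-maker faster?",
        primary_metric="Number of decision-makers reached",
        secondary_metric="Number of sales handoffs",
        problem_statement="Contacts were hesitant to provide the decision-maker's name",
        hypothesis="One statement will best encourage contacts to share that information",
    )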

Determine which approach to testing works best for your organization

  • Sequential tests – Callers test a single message for a time period, and then test another message for a time period.

Craig recommended sequential testing if you are going to have the same callers executing both tests.

“This is also the type of test we typically run because it’s easiest,” Craig explained. “If one of our lead generation specialists discovers a new approach they think works better, we let them try it and then measure the results.”

  • A/B split tests – Measure multiple messages simultaneously. This is better for larger call centers where you have the manpower to have separate people test separate messages in the same time frame.
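
As a rough illustration of the A/B setup, here is a sketch (Python, with hypothetical names) of how a fresh contact list might be randomly split between a control and a treatment script. Random assignment guards against one group being systematically fresher than the other:

    import random

    def ab_split(contacts, seed=42):
        """Randomly assign contacts to control and treatment groups."""
        rng = random.Random(seed)  # fixed seed so the split is reproducible
        shuffled = contacts[:]
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    # Hypothetical usage: both groups are called in the same time frame,
    # each with its own script.
    control_group, treatment_group = ab_split(["contact_%d" % i for i in range(500)])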

Test one variable at a time

This way, when you see the result of the test, you will know precisely which variable – the general element you intend to test – influenced it. When you test multiple variables at once, you can’t isolate what caused the results.

Here’s an example of three test scripts based on the research plan above. The portion of each script that varies after the opening line is the element of the message we tested.

Control

Hello, ___, my name is Jane and I am calling with The Widget Company.

We are currently the third-largest widget company in the nation offering competitive prices and solutions to make your job easier. When we last spoke, you told me that you use a consulting service to select widget support. Could I have your consultant’s information so the next time they choose widget support, we can be included in their evaluation?

Treatment A

Hello, ___, my name is Jane and I am calling with The Widget Company.

When we last spoke, you told me that you hired a consultant to select widget support. I wanted to let you know that we have a widget sale and I wanted to speak with your consultant to see if our sale on widgets would be a good fit for you. How can I reach them?

Treatment B

Hello, ___, my name is Jane and I am calling with The Widget Company.

When we last spoke, you told me that you work with a consultant to select widget support. Since we do not nationally advertise and may not have had the opportunity to work with your consultant, we would like to share our information with them. I would like to get your consultant’s contact information so that we can be in consideration the next time they do their evaluations for you.

Validity starts with confidence

Level of confidence is a statistical term for the pre-established probability threshold a test must reach before you act on it. The goal is to minimize the chance that the difference in the metrics of interest between the treatments is due to random chance.

For example, a test with a 95% level of confidence means there is only a 5% chance that the observed difference is due to random chance.
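
To make that concrete, here is a minimal sketch (plain Python, standard library only) of the kind of two-proportion z-test you could run on finished test results; the call counts and conversion numbers below are made up for illustration:

    from math import sqrt, erfc

    def confidence_level(conv_a, calls_a, conv_b, calls_b):
        """Confidence that two success rates truly differ
        (two-sided, two-proportion z-test)."""
        p_a, p_b = conv_a / calls_a, conv_b / calls_b
        pooled = (conv_a + conv_b) / (calls_a + calls_b)
        se = sqrt(pooled * (1 - pooled) * (1 / calls_a + 1 / calls_b))
        z = abs(p_a - p_b) / se
        return 1 - erfc(z / sqrt(2))  # 1 minus the two-sided p-value

    # Illustrative numbers: 10 handoffs in 500 control calls
    # vs. 22 handoffs in 500 treatment calls
    level = confidence_level(10, 500, 22, 500)
    print("Level of confidence: %.1f%%" % (level * 100))  # ~96.9%, above the 95% bar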

Here are some examples of validity threats that can negatively affect a test’s level of confidence.

  • Sample distortion effects – This happens when your sample of calls is too small to reach a 95% level of confidence in your testing.

A sufficient sample size depends on your existing success rate. For instance, if you’re measuring the number of sales leads, and your typical success rate is two leads for every 100 calls, then making 500 calls will give a better estimate of your true lead rate than only making 200 calls.

The lower your existing success rate is, the more people you will have to call to achieve a valid test.

Also, it is possible to work with smaller sample sizes; the caveat is your tolerance for risk when making business decisions with less confidence in the result. (A rough sample-size sketch appears after the consistency checklist below.)

  • List pollution effect – You can’t run a new test or treatment on the same list. The list has to be fresh for each test. For example, if you need 500 contacts to achieve validity, you can’t call a list of 250 people twice.
  • History effect – This happens when tests are too drawn out so influences outside the treatment are more likely to skew results. With A/B testing, you will avoid this since both tests run simultaneously. Try to compress the time span of your testing. We prefer one to two weeks.
  • Selection effect – This happens when test subjects aren’t distributed evenly. For instance, one treatment is tested on a list that’s never been called before and another treatment is tested on a list that is months old.
  • Channel selection effect – In teleprospecting, your channel isn’t a pay-per-click advertisement or website; it’s the person who is making the call. Channel consistency is critical to ensuring test validity.

On a website, you can completely control the presentation of value. That’s impossible to do with phone calls. However, you can make them more consistent by:

  • Providing a detailed script for callers to follow.
  • Training them on how to use the script.
  • Recording all calls and listening to at least 50% of them to make sure tone and inflection are similar from call to call.
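
To put a number on the sample-size caveat raised under sample distortion effects above, here is a rough sketch of a textbook sample-size estimate for a two-proportion test (again plain Python; this is a standard statistical approximation, not a MECLABS formula):

    from math import ceil, sqrt

    def calls_needed(baseline_rate, relative_lift, z_alpha=1.96, z_power=0.84):
        """Approximate calls needed per treatment to detect a relative lift
        at a 95% level of confidence with 80% power."""
        p1 = baseline_rate
        p2 = baseline_rate * (1 + relative_lift)
        p_bar = (p1 + p2) / 2
        n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
              + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
        return ceil(n)

    # With a 2% success rate (two leads per 100 calls), detecting a 50% lift
    # takes roughly 3,800 calls per treatment; halve the baseline rate and the
    # requirement roughly doubles, which is why low success rates demand more calls.
    print(calls_needed(0.02, 0.5))  # ~3821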

Consider every test a winner

Even if a test results in fewer conversions, you still haven’t wasted time or money. You’re just one step closer to understanding what works. In fact, sometimes we learn more from a losing test than a winning one.


Related Resources:

Lead Generation: How using science increased teleprospecting sales handoffs 304%

Lead Gen: A proposed replacement for BANT

Landing Page Optimization online course

Customer Connection: Does your entire marketing process connect to your customers’ motivations?

Landing Page Optimization: Addressing customer anxiety

Landing Page Optimization: Test ideas for B2B lead capture page