Archive

Posts Tagged ‘conversion’

Factors Affecting Marketing Experimentation: Statistical significance in marketing, calculating sample size for marketing tests, and more

April 4th, 2023

Here are answers to questions SuperFunnel Cohort members put in the chat of recent MEC200 and MEC300 LiveClasses for ChatGPT, CRO and AI: 40 Days to build a MECLABS SuperFunnel (feel free to register at that link to join us for an upcoming MECLABS LiveClass).

How many impressions or how much reach do we need for statistical significance?

I can’t give you a specific number, because the answer will vary based on several factors (described below). Also, MECLABS SuperFunnel Cohort members now have access to a Simplified Test Protocol in their Hub, and you can use that tool to calculate these numbers, as shown in Wednesday’s LiveClass.

But I included the question in this blog post because I thought it would be helpful to explain the factors that go into this calculation. And to be clear, I’m not the math guy here. So I won’t get into the formulas and calculations. However, a basic understanding of these factors has always helped me better understand marketing experimentation, and hopefully it will help you as well.

First of all, why do we even care about statistical significance in marketing experimentation? When we run a marketing test, essentially we are trying to measure a small group to learn lessons that would be applicable to all potential customers – take a lesson from this group, and apply it to everyone else.

Statistical significance helps us understand that our test results represent a real difference and aren’t just the result of random chance.

We want to feel like the change in results is the work of our own hand – a better headline on the treatment landing page, or a better offer. It’s human nature. And since we can see the results with our own eyes, it is very hard to accept that a 10% conversion rate may not really be any different from an 8% conversion rate.

But it may just be randomness. “Why is the human need to be in control relevant to a discussion of random patterns? Because if events are random, we are not in control, and if we are in control of events, they are not random, there is therefore a fundamental clash between our need to feel we are in control and our ability to recognize randomness,” Dr. Leonard Mlodinow explains in The Drunkard’s Walk: How Randomness Rules Our Lives.

You can see the effect of randomness for yourself if you run a double control experiment – split traffic to two identical landing pages and even though they are exactly the same, they will likely get a different number of conversions.
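If you want to see this without setting up a live test, here is a minimal simulation of that double control experiment. The 5% conversion rate and 1,000 visitors per page are made-up inputs; any values will show the same effect:

import numpy as np

rng = np.random.default_rng(seed=7)

# Two identical landing pages, each seen by 1,000 visitors,
# every visitor converting with the same true 5% probability
true_rate = 0.05
visitors_per_page = 1_000

conversions_a = rng.binomial(visitors_per_page, true_rate)
conversions_b = rng.binomial(visitors_per_page, true_rate)

print(f"Page A: {conversions_a} conversions ({conversions_a / visitors_per_page:.1%})")
print(f"Page B: {conversions_b} conversions ({conversions_b / visitors_per_page:.1%})")
# The pages are identical, yet the counts will usually differ --
# that gap is pure randomness, not a real difference.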

We fight randomness with statistical significance. The key numbers we want to know to determine statistical significance are:

  • Sample size – How many people see your message?
  • Conversions – How many people act on your message?
  • Number of treatments – For example, are you testing two different landing pages, or four?
  • Level of confidence – Based on those numbers, how sure can you be that there really is a difference between your treatments?

And this is the reason I cannot give you a standard answer for the number of impressions you need to reach statistical significance – because of these multiple factors.

I’ll give you an (extreme) example. Let’s say your sample size is 100 and you have four treatments. That means each landing page was visited by 25 people. Three of the landing pages each get three conversions, and the other landing page gets four conversions. Since so few people saw these pages and the difference in conversions is so small, how confident can you be that they are truly different? Perhaps you just randomly had one more motivated person in that last group who gave you the extra conversion.
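To put a number on that intuition – this is just an illustrative sketch, not part of the Simplified Test Protocol – you can run a chi-square test on exactly those counts (25 visitors per page; 3, 3, 3 and 4 conversions):

from scipy.stats import chi2_contingency

# Four treatments, 25 visitors each: conversions vs. non-conversions
conversions = [3, 3, 3, 4]
non_conversions = [25 - c for c in conversions]

chi2, p_value, dof, expected = chi2_contingency([conversions, non_conversions])

print(f"p-value: {p_value:.2f}")                   # about 0.97
print(f"Level of Confidence: {1 - p_value:.0%}")   # about 3% -- nowhere near 95%

(Note that the expected counts here fall well below the usual guidelines for a chi-square test – which is itself a symptom of the sample being far too small.)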

And this assumes an even traffic split, which you may not want to do based on how concerned you are about the change you are making. As we teach in How to Plan Landing Page Tests: 6 Steps to Guide Your Process, “Using an uneven traffic split is helpful when your team is testing major changes that could impact brand perception or another area of your business. Although the results will take longer to reach statistical significance, the test is less likely to have an immediate negative impact on business.”

Now, let’s take another extreme example. Say your sample size is 10,000,000 and you have just a control and a treatment. The control gets 11 conversions, but the treatment gets 842,957 conversions. In that case, you can be pretty confident that the control and treatment are different.
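Running the same kind of check on those (deliberately absurd) numbers – here a two-proportion z-test, assuming an even 5,000,000/5,000,000 split – returns a p-value of essentially zero:

from statsmodels.stats.proportion import proportions_ztest

# Control: 11 conversions; treatment: 842,957 conversions
stat, p_value = proportions_ztest(count=[11, 842_957], nobs=[5_000_000, 5_000_000])

print(f"p-value: {p_value:.3g}")  # effectively 0 -- the difference is unmistakably real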

But there is another number at play here – Level of Confidence (LoC). When we say there is a statistically significant difference, it is at a specific Level of Confidence. How sure do you want to be that the control and treatment are different? For marketing experimentation, 95% is the gold standard. But 90%, or even 80%, could be enough if the change likely isn’t going to be harmful and doesn’t take too many resources to make. And the lower the Level of Confidence you are OK with, the smaller the sample size and the smaller the difference in conversions you need to be statistically significant at that LoC.
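Here is a rough sketch of that trade-off using the standard two-proportion sample size formula. The 5% baseline rate, 20% relative lift and 80% statistical power are hypothetical inputs, not recommendations:

from scipy.stats import norm

def sample_size_per_arm(p1, p2, confidence, power=0.80):
    # Standard two-proportion sample size formula (two-sided test)
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

baseline, treatment = 0.05, 0.06  # a hypothetical 20% relative lift

for confidence in (0.95, 0.90, 0.80):
    n = sample_size_per_arm(baseline, treatment, confidence)
    print(f"{confidence:.0%} LoC: ~{n:,.0f} visitors per treatment")

With these made-up inputs, dropping from 95% to 80% confidence cuts the required sample by more than 40% – roughly 8,200 visitors per treatment down to about 4,700.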

So is Estimated Minimum Relative Difference our desired/target lift if our test performs as expected?

Once you understand how statistical significance works (as I described in the previous question), the next natural question is – well, how does this affect my business decisions?

The first answer is, this understanding will help you run marketing experiments that are more likely to predict your potential customers’ real-world behavior.

But the second answer is – this should impact how you plan and run tests.

This question refers to the Estimated Minimum Relative Difference in the Simplified Test Protocol that SuperFunnel Cohort members receive, specifically in the test planning section that helps you forecast how long to run a test to reach statistical significance. And yes, the Estimated Minimum Relative Difference is the difference in conversion rate you expect between the control and treatment.

As discussed above, the larger this number is, the fewer samples – and the less time to collect those samples – it takes to run a test.
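As an illustration, here is a hedged sketch using statsmodels’ power tools to turn the Estimated Minimum Relative Difference into a duration forecast. The 5% baseline conversion rate and 500 visitors a day are made-up inputs:

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.05       # hypothetical control conversion rate
daily_visitors = 500  # hypothetical traffic, split evenly across two pages

for relative_lift in (0.10, 0.25, 0.50):
    treatment = baseline * (1 + relative_lift)
    effect = proportion_effectsize(treatment, baseline)
    n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.80)
    days = 2 * n_per_arm / daily_visitors
    print(f"{relative_lift:.0%} minimum relative difference: "
          f"~{n_per_arm:,.0f} visitors per arm (~{days:.0f} days)")

With these inputs, a 10% difference ties up the page for roughly two months, while a 50% difference can validate in a few days – note how quickly the required sample grows as the detectable difference shrinks.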

Which means that companies with a lot of traffic can run tests that reach statistical significance even if they make very small changes. For example, let’s say you’re running a test on the homepage of a major brand, like Google or YouTube, which get billions of visits per month. Even a very small change like button color may be able to reach statistical significance.

But if you have lower traffic and a smaller budget, you likely need to take a bigger swing with your test to find a big enough difference. This does not necessarily mean it has to require major dev work. For example, the headlines “Free courtside March Madness tickets, no credit card required” and “$12,000 upper level March Madness tickets, $400 application fee to see if you qualify” are very quick changes on a landing page. However, they are major changes in the mind of a potential customer and will likely receive very different results.

Which brings us to risk. When you run valid experiments, you decrease the risk in general. Instead of just making a change and hoping for the best, only part of your potential customer base sees the change. So if your change actually leads to a decrease, you learn before shifting your entire business. And you know what caused the decrease in results because you have isolated all the other variables.

But the results from your experiments will never guarantee an outcome. They will only tell you how likely it is that there will be a difference when you roll out that change to all your customers for a longer period. So if you take that big swing you’ve always wanted to take, and the results aren’t what you expect, the test may rein your team in before a major fail.

As we say in Quick Guide to Online Testing: 10 tactics to start or expand your testing process, “If a treatment has a significant increase over the control, it may be worth the risk for the possibility of high reward. However, if the relative difference between treatments is small and the LoC is low, you may decide you are not willing to take that risk.”

With a test running past 4 weeks, how concerned are you about audience contamination between the variants?

Up until now we’ve been talking about a validity threat called sampling distortion effect – failure to collect a sufficient sample size. As discussed, this could mean your marketing experiment results are due to random variability, and not a true difference between how your customers will react to your treatments when rolled out to your entire customer set.

But there are other validity threats as well. A validity threat simply means that a factor other than the change you made – say, different headlines or different CTAs – was the reason for the difference in performance you saw. You are necessarily testing with a small slice of your total addressable market, and you want to ensure that the results have a high probability of replicability – you will see an improvement when you roll out this change to all of your potential customers.

Other validity threats include instrumentation effect – your measurement instrument affecting the results – and selection effect – the mix of customers seeing the treatments does not represent the customers you will ultimately try to sell to or, in this case, the same customer sees multiple treatments.

These are the types of validity threats this questioner is referring to. However, I think there is a fairly low (but not zero) chance of these validity threats arising simply from running the test somewhat past four weeks. While we did see this problem many years ago, most major platforms have gotten pretty good at assigning a visitor to a specific treatment and keeping them there on repeat visits.

That said, people can visit on multiple devices, so the split certainly isn’t perfect. And if your offer is something that calls for many repeat visits, especially from multiple devices (like at home and at work), this may become a bigger validity threat. If this is a concern, I suggest you ask your testing software provider how they mitigate against these validity threats.

However, when I see your question, the validity threat I would worry about most is history effect – an extraneous variable that occurs with the passage of time. And this one is all on you, friend; there is not much your testing software can do to mitigate against it.

As I said, you are trying to isolate your test so the only variables that affect the outcome are the ones you’ve purposefully changed and are intending to test based on your hypothesis. The longer a test runs, the harder this gets. For example, you (or someone else in your organization) may choose to run a promotion during that period. Maybe you can keep a tight lid on promotions for a seven-day test, but can you keep the promotion wolves at bay in your organization for a full two months?

Or you may work at an ecommerce company looking to get some customer wisdom to impact your holiday sales. If you have to test for two months before rolling anything out, you may test in September and October. However, customers may behave very differently earlier in the year than they would in December, when their motivation to purchase a gift near a looming deadline is a much bigger factor.

While a long test makes a history effect more likely, it can occur even during a shorter test. In fact, our most well-known history effect case study occurred during a seven-day experiment because of the NBC television program Dateline. You can read about it (along with info about other validity threats) in the classic MarketingExperiments article Optimization Testing Tested: Validity Threats Beyond Sample Size.

Join us for a Wednesday LiveClass

As I mentioned, these questions came from the chat of recent LiveClasses. You can RSVP now to join us for an upcoming LiveClass. Here are some short videos to give you an idea of what you can learn from a LiveClass…

“If there’s not a strong enough difference in these two Google ads…the difference isn’t going to be stark enough to probably produce a meaningful set of statistics [for a marketing test]…” – Flint McGlaughlin in this 27-second video.

“…but that’s what Daniel was really touching on a moment ago. OK, you’ve got a [marketing] test, you’ve got a hypothesis, but is this really where you want to invest your money? Is this really going to get the most dollars or the most impact for the energy you invest?…” – Flint McGlaughlin, from this 46-second video about finding the most important hypotheses to test.

How far do you have to go with your marketing to take potential customers from the problem they think they have to the problem they do have? I discuss this topic while coaching the co-founders of an eyebrow beauty salon training company on their marketing test hypothesis in this 54-second video.

Effective Landing Pages: 30 powerful headlines that improved marketing results

August 8th, 2019

There are 21 psychological elements that power effective web design (see infographic). Of those elements, one of the first your customers will experience is the headline.

[Infographic: the 21 psychological elements that power effective web design]

(You can download a PDF of this infographic here.)

 

A powerful headline is your make-or-break opportunity to connect with the customer and get them to engage with the rest of your page — and ultimately convert.

We’ll provide you oodles of examples of effective headlines in this MarketingSherpa blog post to help spark ideas as you brainstorm your own headlines. And you can delve deeper into all 21 of those psychological elements in the following videos from MarketingSherpa’s sister brand, MarketingExperiments:

The 21 Psychological Elements that Power Effective Web Design (Part 1)

The 21 Psychological Elements that Power Effective Web Design (Part 2)

The 21 Psychological Elements that Power Effective Web Design (Part 3)

(This article was originally published in the MarketingSherpa email newsletter.)

 

Now on to the examples …

Like with your own landing pages, in many of these examples the headline wasn’t the only factor that affected performance. However, a different headline is a pretty significant change on a website and is usually a major contributing factor to a change in performance. The best performing headlines below are bolded. The capitalization in these headlines represents the actual capitalization in the test.

Before: We’re here to help.
After: Simplifying Medicare for You
Results: 638% more leads

You can read more about the above headline in Landing Page Optimization: How Aetna’s HealthSpire startup generated 638% more leads for its call center

Before: About The GLS
After: Two Days of World-Class Leadership Training
Results: 16% increase in attendance

You can read more about the above headline in Customer-First Marketing: How The Global Leadership Summit grew attendance by 16% to 400,000

Read more…

Value Gulfs: Making sure there is differentiated product value when marketing upgrades and upsells

May 31st, 2018

A unique value proposition in the marketplace is essential for sustainable marketing success. You must differentiate the value your product offers from what competitors offer. That is Marketing 101 (which certainly doesn’t mean it’s always done well, or done at all).

However, when you offer product tiers, it is important to differentiate value as well. In this case, you are differentiating value between product offerings from your own company.

This is a concept I call “value gulfs” and introduced recently in the article Marketing Chart: Biggest challenges to growing membership. Since that article was already 2,070 words, it wasn’t the right place to expand on the concept. So let’s do so now in this MarketingSherpa blog post.

When value gulfs are necessary

You need to leverage value gulfs in your product offers when you are selling products using a tiered cost structure. Some examples include:

  • A freemium business model
  • Free trial marketing strategy
  • Premium membership offering(s)
  • Good, better, best products
  • Economy paired with luxury offerings
  • Tiered pricing

The customer psychology of value gulfs

MECLABS Institute web designer Chelsea Schulman helped me put together a visual illustration of the value gulf concept:

Allow me to call out a few key points:

Read more…

Email Marketing: Five ideas to increase your email’s perceived value

August 16th, 2017

This article was partially informed by The MECLABS Guide for Optimizing Your Webpages and Better Serving Your Customers. For more information, you may download the full, free guide here.

Email messaging is a constant evolution of tiny tweaks and testing, always in search of the “perfect” formula to keep customers interested and clicking.

The ugly truth is, of course, that there is no perfect email formula. You will always need to test to see what is working — and what will continue to work for your customers.

You always need to be striving towards value. People will open your email and engage with it if they perceive that it will provide some value or service to them.

Marketers and customers shouldn’t be opposed — their issues, concerns and needs are yours as well. So it follows that when you focus on customer-centric tactics that put providing value before promoting your own product, engagement is bound to follow.

In fact, according to a MarketingSherpa online research survey conducted with 2,400 consumers, “the emails are not relevant to me” was chosen as the second most likely reason that customers would unsubscribe from a company’s email list.

This means that relevance and value are more important than ever when planning out your sends. Here are five ideas for how to do it:

Idea #1. Turn your email into a personal note, not a promotion

This is something that all marketers struggle with — we get tunnel vision, focusing only on meeting certain goals instead of looking at the customer’s perspective and needs.

Read more…

Six Places to Focus to Make your Website a Revenue Generator

May 24th, 2016

We have more digital marketing channels than ever before, but it’s become even harder to connect with customers. In my role as chief evangelist for MECLABS Institute, MarketingSherpa’s parent company, I get to talk to marketers and thought leaders daily.

One thing’s become clear: there is a growing divide between those who are fully engaged with digital marketing and those who are still figuring out the fundamentals. When I read the report by Kristin Zhivago, President of Cloud Potential, on “revenue road blocks,” I wanted to see what she’s discovered to help marketers quickly close this digital marketing gap and do better.

If marketers directly address getting six key focus areas right, they can move forward and close the gap between digital channels and customers.

Brian: What inspired you to do your research on revenue road blocks?


Kristin: Actually, it was our day-to-day experience working with company managers that drove us to these conclusions, combined with our research on the best practices of digital market leaders in more than 28 industries. The gap between the companies that are successfully using the newer methods and those who are not is growing wider by the quarter.

What is really concerning is we are seeing otherwise solid, successful companies slipping behind their more digitally adept competitors, and they can’t figure out why. They’re doing what they’ve always done, and it’s not working anymore.

Of course, that’s the problem. Buyers have radically changed the way they buy, especially in the last couple of years, and these sellers haven’t changed the way they’re selling. Mobile and the cloud have changed everything; today’s buyers are not the obedient, pass-through-your-funnel buyers that we used to be able to depend on. They are looking for any excuse to say no, because they are sure that there’s another solution only a click away. There is absolutely no risk for them to reject you. In fact, rejection is the safest option for them.

Read more…

How a Single Source of Data Truth Can Improve Business Decisions

September 12th, 2014

One of the great things about writing MarketingSherpa case studies is having the opportunity to interview your marketing peers who are doing, well, just cool stuff. Also, being able to highlight challenges that can help readers improve their marketing efforts is a big perk as well.

A frustrating part of the process is that during our interviews, we get a lot of incredible insights that end up on the cutting room floor in order to craft our case studies. Luckily for us, some days we can share those insights that didn’t survive the case study edit right here in the MarketingSherpa Blog.

Today is one of those times.

 

Setting the stage

A recent MarketingSherpa Email Marketing Newsletter article — Marketing Analytics: How a drip email campaign transformed National Instruments’ data management — detailed a marketing analytics challenge at National Instruments, a global B2B company with a customer base of 30,000 companies in 91 countries.

The data challenge grew out of a drip email campaign centered on National Instruments’ signature product, after conversion dropped at each stage – from the beta test, to the global rollout, and finally, to results calculated by a new analyst.

The drip email campaign tested several of National Instruments’ key markets, and after the beta test was completed, the program was rolled out globally.

The data issues that came up when the team looked into the conversion metrics were:

  • The beta test converted at 8%
  • The global rollout was at 5%
  • The new analyst determined the conversion rate to be at 2%, which she determined after parsing the data set without any documentation as to how the 5% figure was calculated

Read the entire case study to find out how the team reacted to that marketing challenge to improve its entire data management process.

Read more…

Customer-Centric Marketing: How transparency translates into trust

May 23rd, 2014

Transparency is something that companies usually shy away from. From the customer’s perspective, that product or service just appears for them – simple and easy.

Marketing has a history of touting a new “miracle” or “wonder” product and holding up the veil between brand and consumer.

However, in Wednesday’s Web Optimization Summit 2014 featured presentation, Harvard Associate Professor Michael Norton brought up a different idea, speaking about how hard work should be worn as a badge of honor.

“Think about showing your work to customers as a strategy,” he said, coining it “The Ikea Strategy.”

The idea behind this is that when people make things themselves, they tend to overvalue them – think of all the DIY projects around the house. In the same vein, when people comprehend the hard work that has gone into a product, they are more likely to value it.

Michael gave the example of a locksmith he had spoken to as part of his research to understand the psychology of people who work with their hands. This man was a master locksmith, Michael said, and he started off by talking about how he used to be terrible at his job – he would go to a house, use the wrong tools, take an inordinate amount of time and sweat over the job.

Gradually, he became a master at his trade and could fix the same problem quickly with only one tool. It didn’t matter that his work was superior because of his experience; his customers became infuriated when he handed over the bill. Even though the result was the same, the customers hadn’t seen the effort.

Independent of the service being delivered, Michael explained, we value the labor people put in.

“We like to see people working on our behalf,” he said.

He asked two questions on how to apply this in the marketing sphere:

  • Can this be applied to the online environment as well?
  • Can this be built into websites so people feel like these interfaces are working for them?

A counterintuitive mindset must be applied in this area. In many cases, rapid service or response comes second to transparency. Michael spoke about how his team ran a test where they purposefully slowed down the search results for a travel site by 30 seconds.

“30 seconds of waiting online is like … 11 days. It’s an enormously long time,” he said.

But slowing something down like a search, he continued, makes people feel like the algorithm was working hard for them.

As surprising as it sounds, more customers picked the delayed search travel site because they perceived that it was working harder for them, he said.

Read more…

Social Sharing: Twitter has highest amplification rate, email has highest conversion rate

March 23rd, 2012

While researching an upcoming consumer marketing case study about SquareTrade, a provider of extended consumer electronics warranties that tied a referral program to the release of the latest iPhone, I had the chance to speak with Angela Bandlow, Vice President Marketing, Extole, a consumer-to-consumer social marketing company that creates social referral programs. (Note: You can sign up for the Consumer Marketing newsletter to receive the case study on SquareTrade once it’s published.)

Social referral programs allow companies to tap into their customer advocates to promote their brands, products and services by getting those customers to share within their social networks. These programs then track the shares through to the conversion, whether that is a sale, an opt-in or a coupon redemption.

Extole recently conducted research on 20% of its customer base with an average data collection length of 45 weeks, and this research uncovered some interesting data points on social sharing among different companies.

 

What to measure when tracking social sharing

“If you think about a referral program, it’s a little different in terms of what you would measure than a standard marketing program,” Angela explains.

She offers a few areas to track with referral programs:

  • How many of your customers are participating in your program? These people are called “advocates” at Extole.
  • Of the people participating, how many people do they share with, and through what marketing channel — email, Facebook, Twitter, personal URL (PURL), etc.? This metric is important because it shows the “amplification” of the message or call-to-action.
  • The number of social shares – the product of the number of participants and the average amount of sharing per participant.
  • Clicks-per-share – in other words, the rate at which your customers’ social shares get clicked (these metrics chain together, as sketched below).
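As a minimal illustration of how those metrics chain together – the advocate count below is hypothetical, and the per-share figures simply echo the aggregate numbers quoted later in this post:

# Hypothetical referral-program math, chaining the metrics above
advocates = 1_000            # customers participating in the program
shares_per_advocate = 3.49   # average shares per advocate (aggregate figure below)
clicks_per_share = 1.24      # e.g., a Facebook-like channel
conversion_rate = 0.0121     # assumed here to be conversions per click

total_shares = advocates * shares_per_advocate    # 3,490 shares
friend_clicks = total_shares * clicks_per_share   # ~4,328 clicks
conversions = friend_clicks * conversion_rate     # ~52 conversions

print(f"{total_shares:,.0f} shares -> {friend_clicks:,.0f} clicks -> {conversions:,.0f} conversions")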

“You see a different rate of amplification across social channels versus email,” says Angela. “Email is always going to be a one-to-one share.”

Extole’s research found in aggregate its clients get 3.49 shares per advocate. In other words, everyone participating in a referral program is sharing with almost three and a half friends. On the high end, some advocates share with as many as 12 friends.

Here is a breakdown of some of the data points across several channels:

  • The largest percentage of advocate sharing is through email, and those shares get a 21% open rate, 80% clickthrough and 17% conversion (the highest conversion rate of any channel), which breaks down to .17 clicks per share.
  • Facebook shares average 1.24 clicks per share, but the conversion rate is only 1.21%.
  • Twitter actually averages 6.81 clicks per share, which creates the highest amplification rate of any channel.

 


 

 

This research also found an overall average clickthrough rate of 42% for social referral programs, and almost five friend clicks per share for high-performing programs.

Angela also offers a couple of examples from different clients:

A video rental service company gets the majority of its shares through people who get a personal URL and share it through various channels by cutting and pasting. This referral program includes an incentive offer of a free one-night rental for the customer advocate and a first night free rental for the friends.

Extole has found that amplification is improved when the effort involves an incentive.

A food delivery service gets 70% of its shares through email and another 15% via Facebook. On Twitter, that company gets almost nine clicks for each tweet.

“We’ve always known that word-of-mouth marketing was very powerful, and converted at an estimated three to five times higher rates than other channels,” states Angela. She adds this research puts some data behind the marketing power of letting your customers drive conversions through their social networks and communication channels.

 

Related Resources:

Email Summit: Integrating mobile, social and email marketing channels

Social Media Marketing: Social login or traditional website registration?

Social Media Marketing: A look at 2012, part 1

Social Media Marketing: A look at 2012, part 2

Social Media Marketing: Analytics are free and plentiful, so use them

Using Social Sharing to Achieve Specific Email Goals: 5 Insights

New Chart: Increasing Reach through Social Sharing

Lead Nurturing: 12 questions answered on content, tactics and strategy

March 20th, 2012

B2B and other lead nurturing marketers are beset with challenges. Many are struggling to improve nurturing, scoring and alignment with the sales team, but they have a laundry list of questions.

I received 21 questions from the audience in a recent webcast for the American Marketing Association, “The One-Two Punch of Effective Lead Engagement: Accurate Lists and Powerful Content” (a replay of the webcast is posted below). Yesterday, I answered nine of the questions in a post on the B2B Lead Roundtable Blog. Today, I am answering 12 more below.


Questions on content

Q: When your sales team consists of medical reps who sell to doctors and show up at their offices twice a month, how do you nurture? Especially considering doctors aren’t Internet savvy?

A: I disagree doctors aren’t Internet savvy; there are social networks for the medical community that engage a quarter of a million physicians. That said, equip your sales team to ask for each doctor’s preferred means of communication: email, video, executive summaries, reports, etc. It could be a simple questionnaire.

 

Q: Should we consider paying outside subject matter experts to develop educational content?

A: Leverage internal experts first to build authority. But be sure the content you’re sharing will be valuable even if the prospect never buys. If your content doesn’t meet that standard, then you’ll want to think about using third-party experts to fill the gap.

 

Q: If you keep sending your contacts repurposed content (although the same information), won’t they be annoyed?  Wouldn’t they prefer fresher info?

A: Research suggests it takes at least seven to nine interactions for a message to be remembered.  If you have a complex offering, your audience will appreciate you breaking it down and presenting it in a variety of ways so they can better understand it. We have to look at our content from our customers’ point of view, not our own. Don’t be afraid of repetition — embrace it.

 

Q: What’s the right amount of emails with video versus straight emails?

A: You need to know your audience and how they prefer to consume content. Test and measure.


Questions on tactics

Q: My team has auto-communications that go to prospects once a week for eight weeks, and we have a team of callers that supplement this. Do you believe this will help nurture/re-engage older leads?

A: It could. Here are some thoughts and ideas:

  • Nurturing is about building a relationship based on trust to continue a conversation. It’s not about sending information for its own sake – irrelevant sends could cause prospects to emotionally unsubscribe.
  • Examine the cadence of your emails to determine if once a week is too frequent. Nurturing is a marathon, not a sprint. Nurture them for at least the length of your sales cycle.
  • Look at your results. How many opt-outs do you have? What are the call-to-lead conversion rates? How many opens and clickthroughs are your emails getting? The key is measurement.
  • These resources will help:

Five nurturing tips to create relevant and engaging emails

How ECI Telecom Developed a Content-Marketing Program from Concept to Completion and the Surprising Results

 

Q: How do you know which marketing tactic attracted your customer? Email? Direct Mail? Print? TV?

A:  That’s a challenge every marketer faces in the complex sale. The answer depends on whether you’re measuring first touch or last, and if you’re focused on gathering names or closing the deal immediately.  Leverage your CRM to capture every touch point: Have they attended a webinar, downloaded a whitepaper, or registered for a newsletter? All of these actions contribute, so measure all of them. Make sure your CRM allows you to track multiple campaigns.

 

Q: What is the best way to treat leads from a purchased list versus inbound leads?

A: Your answer can’t be quickly summarized; in fact, a book could be written on the topic. However, these blog posts will help:

How to Build a Quality List and Make Data Drive Leads

Lead Generation Check list – Part 5: Treat your marketing database as a valued asset

Do you expect your inside sales team to practice alchemy?


Questions related to strategy

Q: Any thoughts on lead engagement for B2C versus B2B?

A: In B2B, more people are involved in the buying decision, but, ultimately, people buy from people and the lines between these groups have blurred. MarketingSherpa will soon release its first-ever lead generation benchmark report that includes feedback from more than 1,900 B2B and B2C organizations on their lead generation challenges. In the meantime, here are some resources:

Lead-Gen: Top tactics for a crisis-proof strategy

B2B vs. B2C: What does it really mean?


Q: How does lead-nurturing ROI compare for B2C (rather than B2B)?

A:  As I mentioned above, MarketingSherpa’s 2012 Lead Generation Benchmark Report will be published soon and will have a very detailed answer. Again, reference this post: Lead-Gen: Top tactics for a crisis-proof strategy


Q: Can you set up a simple lead nurturing strategy without lead scoring, and then add scoring later, when you have data to evaluate?

A: Absolutely. In the beginning, simplicity is best.

 

Q: What’s a good lead score for a technology company?

A: You’re in charge of developing your score based on your requirements. There’s no industry-wide scoring system. Here are some lead scoring resources that will help:

Lead Scoring: CMOs realize a 138% lead gen ROI … and so can you

The Lament of the Inside Sales Team: Data, Data Everywhere, but Who’s Ready to Buy?

How to Use Lead Scoring to Drive the Highest Return on Your Trade-Show Investment

Funnel Optimization: Why marketers must embrace change

 

Q: Do you have a buying process model and a list of stages of the sales cycle?

A: Please refer to Pages 7 and 17 in my free e-book: Start With A Lead: Eight critical success factors for lead generation

 

A link to a replay of the webcast is included below. Do you have additional questions? Feel free to ask them in the comments.

 

 

 

Related Resources:

The One-Two Punch of Effective Lead Engagement: Accurate Lists and Powerful Content

How to Get the CEO to Support Your Next Marketing Plan

B2B Marketing Research: 68% of B2B marketers haven’t identified their Marketing-Sales funnel … and it shows

Lead Scoring: CMOs realize a 138% lead gen ROI … and so can you

Marketing 101: What is conversion?

March 15th, 2012

I recently attended an event on social media for film and video professionals. There were four panelists: two social media experts and two video pros who are very active in using social media to market their work. The crowd ranged from very green on the topic to a few power users.

What stood out to me was that when the questions got started, one of the social media experts went off on a marketing riff and threw out the term “conversion.” A hand immediately shot up and asked, “What is conversion?”

Flat out the best question of the evening.

Sometimes as marketers, we get lost in a sea of acronyms — CRM, SEO, ROI, CTR, etc. — and it only took one word to remind me that not everyone gets all of these references.

To be a truly successful marketer, you want to be as transparent as possible and provide clarity. If your message veers into insider esoterica that might confuse the audience, that message is lost. And maybe worse than being ignored, the audience might even feel left out.

 

What is “conversion”?

The definition in the MarketingSherpa glossary that appears in MarketingSherpa handbooks defines conversion as, “The point at which a recipient of a marketing message performs a desired action.” In other words, conversion is simply getting someone to respond to your call-to-action. Read more…