Archive

Posts Tagged ‘metrics’

Marketing Metrics: Is the emphasis on ROI actually hurting Marketing?

October 21st, 2011

In speaking with many, many marketers over the past year, two words — well, actually one word and one acronym — stand out in my mental word cloud when thinking about marketing in 2011: revenue and ROI (return on investment).

The first is a term more commonly seen in financial reports and tossed around the conference table during company meetings. The second is another financial term.

And I’m not just dreaming that these words have infiltrated marketing. Research from the 2012 B2B Marketing Benchmark Report found that 54% of surveyed marketers think “achieving or increasing measurable ROI from lead generation programs” is a top strategic priority for 2012.


I know I’ve written about Marketing proving its worth within the company in terms of revenue generation or measuring ROI more than once over the last year.

Menno Lijkendijk, Director Milestone Marketing, a Netherlands-based B2B marketing company, says the emphasis on ROI in marketing should be reexamined.

Menno’s main point is unimpeachable — return on investment is a financial term with a specific definition that has a very specific meaning to the C-suite in general, and particularly to the CFO. His concern is, not only is the actual ROI of some marketing activities overemphasized, the term itself is gathering too much marketing “buzz.”

He provided an example of a comment left on an online video of his that referenced “intangible ROI,” something he (rightly) says does not exist.

“There is no such thing as intangible ROI. The whole definition of ROI is that it should be tangible,” Menno says.

He continues, “This term — ROI — is now starting to lead a life of its own, and is being used by email service providers to explain to their potential customers that doing business with them will give them great ROI on their marketing investment.” Menno also mentions that email providers are not alone in using this sales pitch, citing search marketers, social media marketers and other agencies.

“There is more than just ROI, and the real value of marketing may require a different metric, or a different scorecard, than just the financial one,” he states.
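Menno’s insistence that ROI be tangible follows directly from the definition: it is a ratio of two measured dollar amounts. A minimal sketch in Python (all figures are hypothetical):

```python
def roi(gain_from_investment, cost_of_investment):
    """Return on investment as a fraction: (gain - cost) / cost."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment

# Hypothetical campaign: $50,000 spent, $65,000 in attributable revenue.
campaign_roi = roi(65_000, 50_000)
print(f"ROI: {campaign_roi:.0%}")  # 30%
```

If either number in the ratio cannot be measured in dollars, the result is not ROI — which is exactly Menno’s point about “intangible ROI.”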

Read more…

Reader Mail: Understanding differences in clickthrough rates and open rates

August 12th, 2011

Recently, my colleague Brad Bortone forwarded me an inquiry from one of our readers, who asked the following:

Can you provide any insight into why my newsletter emails would receive a 10% unique CTR and a 3% open rate? Aren’t open rates generally the larger number?

We use XXXXXXXX as our email service provider. Could this be related to how our newsletter renders in the preview pane of email clients?

In thinking about this, I realized that many email marketers may be asking the same questions, and could benefit from an extensive reply. Besides, I don’t get much mail around here, so I was excited to help out.
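For context, one common cause of this pattern: opens are typically counted via a tracking pixel (a tiny embedded image), so recipients who block images (as many preview panes do by default) can click a link without ever registering an open, while clicks are counted reliably through redirect links. A sketch with hypothetical numbers:

```python
# Sketch of how a unique CTR can exceed a reported open rate.
# Opens are usually counted via a tracking pixel, so recipients who
# block images register clicks but not opens. All figures hypothetical.

delivered = 10_000
unique_clicks = 1_000   # click tracking uses redirect links: fully counted
pixel_opens = 300       # only recipients who loaded images count as "opens"

unique_ctr = unique_clicks / delivered   # 10%
open_rate = pixel_opens / delivered      # 3%

print(f"Unique CTR: {unique_ctr:.0%}, open rate: {open_rate:.0%}")
```

In other words, the open rate is a floor, not a true count, whenever image blocking is common among recipients.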

Here is what I wrote in my initial reply: Read more…

Marketing Metrics: Aligning ROI goals across the enterprise

May 26th, 2011

More than 80 percent of CMOs were dissatisfied with their ability to measure marketing ROI, and less than 20 percent said their company employed meaningful metrics, according to a CMO Council study quoted in Harvard Business Review.

The same article cited Copernicus Marketing Research findings that noted that most acquisition efforts fail to break even, no more than ten percent of new products succeed, most sales promotions are unprofitable, and advertising ROI is below four percent.

There is no absence of metrics or measurement tools. The problem is less one of analytics than of lack of alignment across the enterprise as to ROI goals.

But how can that alignment be attained?

There is a need for a common vocabulary and shared buy-in as to key performance indicators (KPIs). It is commonly assumed that all can be resolved if the VP of Sales and the VP of Marketing just go off and have a beer together. This rarely works. A better way to achieve alignment is to borrow from the toolbox of strategic planning and to use scenarios.

Scenario planning is a discipline popularized by Royal Dutch Shell in the ‘80s that has become a standard tool of strategic planning professionals. It is the process in which managers invent and consider the implications of alternate assumptions and futures. As a team-building exercise, it can remove the barriers that office politics, turf wars, and loyalty to current vendors bring to the effort to align goals and assumptions.

As consultant Juergen Daum has written, “The purpose of scenario planning is to help managers to change their view of reality, to match it up more closely with reality as it is, and reality as it is going to be. The end result, however, is not an accurate picture of tomorrow, but better decisions about the future.”

Scenario planning session

A scenario planning session can be done over a one- or two-day off-site:

  • Start by modeling a scenario in which the current ROI goals and benchmarks are accurate and lead to a positive future. This is the “rosy scenario” that is implicitly guiding current thinking.
  • The team can then turn, in a politically non-threatening way, to alternate scenarios – those in which current goals and assumptions can be challenged. This process surfaces doubts and uncertainties while clarifying disconnects among the team as to definitions and priorities. Whatever the outcome, the very process builds agreement and understanding.

This process allows managers to confront, without defensiveness, the essential question:

“What if our current assumptions and procedures are wrong?”

Contemplating a scenario without a “rosy” outcome forces participants to both question current practices and to work together to forecast outcomes. The implications of using “wrong” ROI goals can be discussed collaboratively, fostering collaboration and understanding. Ideally, new metrics can be identified and outmoded ones discarded. Inevitably, participants emerge with greater understanding of their goals, of their key performance indicators, and of each other.

Bob Heyman is a keynote speaker at Optimization Summit 2011, and all attendees will receive a copy of his book, “Marketing by the Numbers: How to Measure and Improve the ROI of Any Campaign,” provided by HubSpot.

Gary Angel, President and CTO, Semphonic, contributed heavily to this blog post as well.

Related Resources

Digital Marketing: How to measure ROI from your agencies

Lead Marketing: Cost-per-lead and lead nurturing ROI

New Chart: What Social Metrics are Organizations Monitoring and Measuring?

Maximize your Agency ROI

Digital Marketing: How to measure ROI from your agencies

May 17th, 2011

Today’s marketing world is incredibly complex. The growth of digital has dramatically expanded the number of channels and customer touch points that require marketing attention, and it isn’t just a question of numbers. Digital channels often involve unique skills, unique technology and unique culture. Combining SEO expertise with great digital creative plus Facebook smarts and traditional media buying isn’t just difficult, it’s pretty much impossible.

Inevitably, you’re faced with a world where you need to rely upon, direct, manage and motivate multiple agency partners. To do that – and to understand how to allocate resources between channels, how to decide if an agency is giving you all they can, and how to choose where to invest your time and resources – takes sophisticated measurement. You can’t manage what you don’t measure – this statement is as true for your agency relationships as it is for your marketing dollars.

In a world where there are lies, damn lies, and statistics, why would you let your agencies measure their own performance? If your agencies are siloed, they have every incentive (and ability) to make their channel look maximally successful. If you’ve concentrated everything in a single agency, that agency has every incentive (and ability) to make their entire program look successful and not delve too deeply into any single piece.

In today’s environment, measurement is just too important to leave to the wolves.

Intra-Agency Measurement suffers from four BIG problems:

  • Skill Set: For most agencies, measurement is just grafted onto a creative culture. It isn’t their business, core expertise or focus and isn’t what makes them money.
  • Bias: It doesn’t take evil intent to create bias. One of the great challenges of measurement is the temptation to always pass on good news. When the analyst has a self-interested stake in the measurement, this problem is that much worse.
  • Siloed View of the World: Even the best measurement an agency can provide is typically limited to their world and their tools. They see only their slice of the pie – meaning that cannibalization, cross-channel, and customer issues are invisible to them.
  • Standardization: Every industry has evolved its own way of talking about measurement, and they are all different. Nobody agrees on what engagement means or how ROI metrics should be applied. Vendors have reports and technology that are narrowly adapted to their own language and techniques and cannot be standardized.

What’s the right solution?

You need a “Digital Watchdog” – either an analytics agency of record or an internal employee or department tasked with making sure that every channel you use has the right measurement, the right standards, and the right level of resources and attention.

A Digital Watchdog should be focused explicitly on measurement, measurement tools and measurement skills. That guarantees you a culture based on measurement and an appropriate skill set to solve your measurement challenges. A Digital Watchdog should have NO vested interest in your spend. They should not manage ANY media budget or have any stake in which channels you invest in or use.

That’s what you should expect of a Digital Watchdog. Here’s what they should expect of you.

A Digital Watchdog needs to be given a cross-channel view of your customers and measurement. They need to see and have access to all your marketing spend and agency reporting. A Digital Watchdog needs to be able to create or collaborate on the creation of a comprehensive view of measurement standardization. As long as you allow each channel to measure itself its own way, you can’t expect ANYONE to make sense of the whole picture.

There are some key steps when it comes to getting started with a Digital Watchdog. Usually, you’ll start with a review of the measurement in place for each channel – is it complete, accurate, and robust? Having the basic measurement infrastructure in place (and knowing it’s right) is essential.

The second step is typically the creation of a standardized measurement framework (based on segmentation) that can be applied to every channel. Useful measurement begins with audience segmentation and drives across your business naturally – not by forcing your business into artificial measurement constructs.

Once you’ve got a good framework in place, it’s time to execute both Media-Mix and Attribution Modeling to understand spending interactions and optimization. Media-Mix Modeling is your best tool for deciding how moving the levers of marketing spend by channel drives total business results. Attribution Modeling helps you understand how channels work in harmony (or at cross-purposes) when it comes to acquisition, engagement and conversion.
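To illustrate the attribution side, one of the simplest rules is linear attribution, which splits credit for each conversion evenly across the channels in the converting customer’s path. This is only a sketch of one possible model, with hypothetical data:

```python
from collections import defaultdict

def linear_attribution(conversion_paths):
    """Split each conversion's credit evenly across the channels in its path.

    conversion_paths: list of channel lists, one per converting customer.
    Returns total fractional conversions credited to each channel.
    """
    credit = defaultdict(float)
    for path in conversion_paths:
        share = 1.0 / len(path)
        for channel in path:
            credit[channel] += share
    return dict(credit)

# Hypothetical paths for three converting customers
paths = [
    ["search", "email"],
    ["display", "search", "email"],
    ["email"],
]
print(linear_attribution(paths))
```

Other rules (first-touch, last-touch, position-based) redistribute the same credit differently; the point is that the rule must be chosen explicitly, not left to each channel’s agency.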

At the same time, you’ll want to identify the holes and gaps where your agency measurement isn’t adequate, where their performance is sub-optimal, or where you’re not getting the attention you deserve. Your Digital Watchdog should drive channel-specific optimizations for “problem” agencies and help you evaluate how to get more from your relationships.

With the dollars involved in today’s marketing world, there’s just too much at stake to count on your agencies doing the right thing with measurement. They are in the wrong place, with the wrong tools, the wrong motives and the wrong skill sets to do the job right.

Bob Heyman is a keynote speaker at Optimization Summit 2011, and all attendees will receive a copy of his book, “Marketing by the Numbers: How to Measure and Improve the ROI of Any Campaign,” provided by HubSpot.

Gary Angel, President and CTO, Semphonic, contributed heavily to this blog post as well.

Related Resources:

Optimization Summit 2011

Marketing Strategies: Is performance-based vendor pricing the best value?

New Chart: What Social Metrics are Organizations Monitoring and Measuring?

Maximize your Agency ROI

Photo attribution: Randy Robertson

Social Media Measurement: Moving forward with the data and tools at hand

April 29th, 2011

Social media measurement is in its early phases, and marketers need to decide whether to parse the social media cacophony, much like a radio astronomer, gathering as much data as possible to discern the signs of life, or to selectively focus on a small but sufficiently meaningful set of metrics.

The word “sufficient” can span a wide spectrum, and determining what is sufficient is perhaps the question that marketers must answer.

In some sense, you really don’t have a choice. How much data you can afford to collect and analyze is limited by your organization’s budgetary and human resources. If you are not already collecting enough data for “big” analytics (“Approach 1,” which I described in my last blog post), it makes sense to get the most out of what you have now relatively quickly, and in the process learn what additional data you need.

I spend a significant amount of time in digital photography, and my friends often ask me for advice on what camera to buy as they are getting more “serious.” My answer is always the same—first, get the most out of the camera you have. Once you start appreciating what your camera lacks, then you can start thinking about investing in those specific features.

In the same sense, getting started is critical. Reading blog posts will not give you a concrete sense of social media (SoMe) measurement until you get your own hands on a monitoring tool—even if you start by manually listening to conversations using RSS feeds, Twitter, Google Alerts, and the like.

Second, you need to clearly identify your objectives. In our own research project on SoMe measurement with Radian6, I am leaning toward focusing on best practices for specific scenarios—e.g., a Facebook company page—to deal with manageable amounts of data and produce results on a realistic timeline.

So for those not quite ready for “big” analytics, let’s take a look at a quick start approach…

Approach 2: A microscope, not a radio telescope

Commit to a set of metrics you’ll be accountable for, and stick with them. This is a far more pragmatic approach that does not require every kind of data to be available for measurement. It may appear less scientific, but that is not the case. While focusing on a smaller number of metrics does not paint the whole picture the way the first approach does, trending data over time can be highly valuable and meaningful in reflecting the effectiveness of marketing efforts.

Taking into account the marginal time, effort, and talent required to process more data, it makes economic sense to focus on a smaller number of data points. With fewer numbers to crunch, marketers armed only with the data available directly from their social media management tools can, for example, calibrate their marketing efforts against this data to build actionable KPIs (key performance indicators).

During Social Media Week, NYC-based Social2B’s Alex Romanovich, CMO, and Ytzik Aranov, COO, presented a straightforward measurement strategy rooted in established, if not venerated, marketing heuristics, such as Michael Porter’s Value Chain Analysis. Their core message is to appreciate that different social media KPIs will be important not only to different companies and industry segments, but “these KPIs also have to align well with more traditional metrics for that business – something that the C-Level and the financial community of this company will clearly understand.”

Alex stresses that “the entire ‘value chain’ of the enterprise can be affected by these metrics and KPIs – hence, if the organization has a sales culture and is highly client-centric, the entire organization may have to adapt the KPIs used by the sales organization, and translated back to the financial indicators and cause factors.”

This approach should immediately make sense to marketers, even without any knowledge of statistical analysis.

Social2B focuses not only on the marketing, but also on the customer service component of SoMe ROI, and here is Ytzik’s short list of steps for getting there:

  1. Define the social media campaign for customer service resolution
  2. Solve for the KPI and projections
  3. Apply Enterprise Scorecard parameters, categories
  4. Solve for risk, enterprise cost, growth, etc.
  5. Map to social media campaign cost
  6. Solve for reduction in enterprise costs through social media
  7. Justify and allocate budget to social media
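The arithmetic implied by steps 5 through 7 can be sketched simply: compare the reduction in enterprise (customer service) cost against the social media campaign’s cost. All figures below are hypothetical:

```python
def service_roi(cost_reduction, campaign_cost):
    """ROI of a social customer-service effort: (savings - cost) / cost."""
    return (cost_reduction - campaign_cost) / campaign_cost

# Hypothetical: deflecting 2,000 support calls at $6 each to social
# channels, against an $8,000 social media campaign cost.
savings = 2_000 * 6  # $12,000 reduction in enterprise cost
print(f"ROI: {service_roi(savings, 8_000):.0%}")  # 50%
```

A positive figure here is the kind of tangible number that can justify the budget allocation in step 7.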

An important element here is the Enterprise Scorecard—another established (though loosely defined) management tool that is often overlooked even by large-scale marketing organizations. Given the novelty of SoMe, getting it into the company budget requires not only proving the ROI numerically, but also speaking the right language. Ytzik’s “C-level Suite Roadmap” might appear simple, but it requires that corporate marketers study up on their notes from business school:

  • Engage in Compass Management (managing and influencing your organization vertically and horizontally in all directions)
  • Define who owns the Web and social media within the company
  • Identify the enterprise’s value chain components
  • Understand the enterprise’s financial scorecard

Again, no statistics here—it is understood that analysis will be required, but these tools will put you in a good position when the time comes to present your figures.

How to get started

Finally, I wanted to get as pragmatic as possible to help marketers get started and not get stuck in a data deluge. Here are Social2B’s top 10 questions to ask yourself before you scale your SoMe programs:

  1. Is my organization and my executive management team ready for social media marketing and branding?
  2. Does everyone treat social media as a strategic effort or as an offshoot of marketing or PR/communications?
  3. Where in the organization will social media reside?
  4. Will I be able to allocate sufficient budget to social media efforts in our company?
  5. How will social media discipline be aligned with HR, Technology, Customer Service, Sales, etc.?
  6. What tools and technologies will I need to implement social media campaigns?
  7. Will ‘social’ also include ‘mobile’?
  8. How will we integrate SoMe marketing campaigns with existing, more ‘traditional’ marketing efforts?
  9. How much organizational training will we need to implement in integrating ‘social’ within our enterprise?
  10. Are we going to use ‘social’ for advertising and PR/Communications? What about ‘disaster recovery’ and ‘reputation management’?

Related Resources

Social Media Measurement: Big data is within reach

2011 Social Marketing Benchmark Report – Save $100 with presale offer (ends tomorrow, April 30)

Always Integrate Social Marketing?

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Social Media Measurement: Big data is within reach

April 28th, 2011

Should marketers wait for a grand unified theory of social media ROI measurement, or confidently move forward with what they have available to them now?

This question has been at the forefront of my thinking, as we proceed with MarketingSherpa’s joint research project with Radian6 to discover a set of transferable principles, if not a uniform formula to measure social media (SoMe, pronounced “so me!”) marketing effectiveness.

As I have written previously, some of the popular measurement guidelines provide a degree of comfort that comes from having numbers (as opposed to just words and PowerPoint® slides), but fail to connect the marketing activity to bottom-line outcomes.

To help think through this, I spoke with several practitioners to get some feedback “from the trenches” during SoMe Week here in NYC. With their help, I broadly defined two approaches.

Approach 1: Brave the big data

Take large volumes of diverse data, from both digital and traditional media, and look for correlations using “real” big-data analysis. This analysis is performed on a case-by-case basis, and the overarching principles are the well-established general statistical methods, not necessarily specifically designed for marketers.

Pros

  • The methodologies are well established
  • There are already tools to help (Radian6, Alterian, Vocus, etc.)

Cons

  • Most marketers are not statisticians, nor do they have the requisite tools (e.g., SAS is excellent software, but it comes at a premium price)
  • Comprehensive data must be available across all relevant channels; otherwise, the validity of any conclusions from the data rapidly evaporates (Radian6’s announcement of integrating third-party data streams like Klout, OpenAmplify and OpenCalais, in addition to its existing integration with customer relationship management (CRM), Web analytics, and other enterprise systems, certainly helps)
  • In the end, it’s still conversation and not conversion without attribution of transactional data

If the volume of data becomes overwhelming, analytical consulting companies can help. NYC-based Converseon does precisely that, and I asked Mark Kovscek, their SVP of enterprise analytics, about the biggest challenges to getting large projects like this completed efficiently. Mark provided several concrete considerations to help marketers think through this, based on Converseon’s objectives-based approach that creates meaningful marketing action, measures performance, and optimizes results:

  • Marketers must start with a clear articulation of measurable and action-oriented business objectives (at multiple levels, e.g., brand, initiative, campaign), which can be quantified using 3-5 KPIs (e.g., Awareness, Intent, Loyalty)
  • Large volumes of data need to be expressed in the form of simple attributes (e.g., metrics, scores, indices), which reflect important dimensions such as delivery and response and can be analyzed through many dimensions such as consumer segments, ad content and time
  • The key to delivering actionable insights out of large volumes of data is to connect and reconcile the data with the metrics, with the KPIs, and with the business

How much data is enough? The answer depends on the level of confidence required. Mark offered several concrete rules of thumb for the “best-case scenario” when dealing with large volumes of data:

  • Assessing the relationship of data over time (e.g., time series analysis) requires two years of data (three preferred) to accurately understand seasonality and trend

–   You can certainly use much less to understand basic correlations and relationships. Converseon has created value with 3-6 months of data in assessing basic relationships and making actionable (and valuable) decisions

  • Reporting the relationship at a point in time requires 100-300 records within the designated time period (e.g., for monthly listening reporting, Converseon looks for 300 records per month to report on mentions and sentiment)

–   This is reasonably easy when dealing with Facebook data and reporting on Likes or Impressions

–   However, when dealing with data in the open social graph to assess a brand, topic or consumer group, you can literally process and score millions of records (e.g., tweets, blogs, or comments) to identify the analytic sample to match your target customer profile

  • Assessing the relationship at a point in time (e.g., predictive models) requires 500-1000 records within the designated time period
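These rules of thumb can be expressed as a quick data-sufficiency check. The thresholds below use the upper ends of Converseon’s ranges; the helper itself is only an illustrative sketch:

```python
def analysis_supported(records_in_period, months_of_history):
    """Classify what the rules of thumb say the data can support."""
    supported = []
    if records_in_period >= 300:
        supported.append("point-in-time reporting")   # 100-300 records needed
    if records_in_period >= 1000:
        supported.append("predictive modeling")       # 500-1000 records needed
    if months_of_history >= 24:
        supported.append("time-series / seasonality") # two years, three preferred
    return supported

# Hypothetical dataset: 1,200 records per month, six months of history
print(analysis_supported(records_in_period=1200, months_of_history=6))
```

In this hypothetical case the data supports reporting and predictive modeling, but not yet seasonality analysis.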

Understanding the theoretical aspects of measurement and analysis, of course, is not enough. A culture of measurement-based decision making must exist in the organization, which means designing operations to support this culture. How long does it take to produce a meaningful insight? Several more ideas from Converseon:

  • 80% of the work is usually found in data preparation (compiling, aggregating, cleaning, and managing)
  • Reports that assess relationships at a single point in time can be developed in 2-3 weeks
  • Most predictive models can be developed in 4-6 weeks
  • Assessing in-market results and improving solution performance is a function of campaign timing

Finally, I wanted to know what marketers can do to make this more feasible and affordable. Mark recommends:

  • Clearly articulate business objectives and KPIs and only measure what matters
  • Prioritize data
  • Rationalize tools (eliminate redundancy, look for the 80% solution)
  • Get buy-in from stakeholders early and often

In my next blog post on this topic, I’ll discuss an approach to SoMe measurement that trades some of the precision and depth for realistic attainability—something for the many marketers who can’t afford the expense or the time (both to learn and to do) required to take on “big data.”

Related Resources

Social Media Marketing: Tactics ranked by effectiveness, difficulty and usage

Always Integrate Social Marketing?

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Social Marketing ROAD Map Handbook

Marketing Strategies: Is performance-based vendor pricing the best value?

April 12th, 2011

Every advertising agency, SEO specialist, and PR firm likes to be seen as a partner, not a vendor. And that may well define your relationship. But, go down to accounting and explain that relationship, and they’ll laugh in your face.

And for good reason. While, hopefully, you do have that close-knit partner relationship, at the end of the day this is a financial arrangement, and you must maximize the value of that arrangement.

On the face of it, performance-based pricing seems like a no-brainer. You get a guaranteed result, or you don’t pay.

Is this a great country, or what?

Like many things, the devil is in the details. First, you have to keep in mind that the vendor knows the metrics far better than most prospective clients do. That means, in many cases, the vendor is selling the illusion of risk. Second, and more importantly, you have to be sure the result you are paying for is the result you really want.

Let me show you what I mean. I’ll use a teleprospecting vendor as an example, and highlight the lesson you can get out of each example for the type of vendors you work with every day.

What intermediate metrics truly contribute to your success?

In B2B lead generation, a common result is defined as an appointment for salespeople. The cost per appointment generally runs from about $400 to $800, depending on volume, your brand and the target. If you can provide the vendor with the people your sales team absolutely, positively wants appointments with, you’re in business.

In my case, I would gladly take appointments with CMOs of B2B companies with $500 million or more in revenue. At least, that would probably be my immediate response. Of course, there might be a few CMOs in that target who oversee pure e-commerce plays, or highly commoditized, low-end products that do not require lead generation, my area of expertise (or so I would like to think). Therefore, I might pay for some appointments that I don’t really want. So, the real cost for a qualified appointment might be a bit higher than I originally agreed to.

Then there is the hidden cost: sales productivity. The purpose of such services is to increase sales productivity. For these kinds of top executive-level appointments, the representative might very well expect to meet face-to-face with the CMO. So, you have to add to the equation the cost of the commuting time and meeting time. Loaded field sales costs for complex solutions often start at about $100 an hour and can be $500 an hour or more, for elite, high-end key account sales people.

Very quickly, a $500 appointment can become an $800 or even $1,500 appointment, especially if any serious commuting takes place. If the conversion-to-deal is high or the revenue-per-deal is high, then who cares? In many cases, however, buyers find out that 20 to 30 percent of the appointments are not a fit. Now the cost of the qualified appointment goes way up, and the soft cost of sales expense goes to the moon, not to mention the hit on sales productivity.
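Making that arithmetic explicit: the loaded cost of a qualified appointment combines the vendor’s per-appointment fee, field-sales time, and the share of appointments that turn out to actually fit. A sketch with hypothetical inputs:

```python
def cost_per_qualified_appointment(vendor_fee, sales_hours, hourly_rate, fit_rate):
    """Effective cost of one *qualified* appointment.

    vendor_fee:  pay-for-performance fee per appointment
    sales_hours: commuting + meeting time per appointment
    hourly_rate: loaded field-sales cost per hour
    fit_rate:    fraction of appointments that are actually a fit
    """
    loaded = vendor_fee + sales_hours * hourly_rate
    return loaded / fit_rate

# $500 appointment, 4 hours of travel + meeting at $100/hour, and 75%
# of appointments a fit (i.e., 25% wasted):
print(f"${cost_per_qualified_appointment(500, 4, 100, 0.75):,.0f}")  # $1,200
```

Note how sensitive the result is to the fit rate: dropping from 75% to 50% fit pushes the same $900 loaded appointment to $1,800 per qualified appointment.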

Unless you are absolutely certain that your sales team wants appointments with a particular set of individuals, then you really need to focus more on qualified leads, not just appointments.

LESSON LEARNED: Make sure you pick the correct intermediate metrics when paying for performance.

Are you helping your vendors be successful?

OK, now you have learned your lesson, the hard way. You won’t do that again, right? So you negotiate a cost-per-lead fee structure. Before you do, you wisely work with sales to define BANT (Budget, Authority, Need and Timeline) lead criteria and structure the deal accordingly. Again, the devil is in the details. What if sales discovered, after further review, that what they really wanted was to get into larger accounts before the prospect had finalized a budget? In those cases, maybe the deal takes longer, but the win rate is higher and the deal size is larger. Happens all the time. Now you have to try to change the deal, at least for some accounts.

With leads, there is also often subjective information, open to interpretation. Is the prospect really acting with authority? Do they really have a budget? Even seasoned sales people can be mistaken about such things. In short, lead qualification is almost always nuanced, complex and evolving, as the teleprospecting operation figures out how to qualify leads precisely and the sales organization figures out what it really wants and needs. This reality often creates conflict with the vendor initially, because the fee structure negotiated is not really the right fee structure and so one side or the other loses.

Finally, if the vendor is taking all the risk, many people understandably put vendor support on the back burner. It’s human nature. In reality, teleprospecting operations fail, including those that are in-house, without proper support from marketing and sales. For example, from marketing, this operation needs lists, assets and tools, and an appropriate supply of reasonably qualified responders. From sales, the team needs training and mentoring on qualification and precise, rapid feedback on leads.

After all, the fee is fixed and the operation should run on auto-pilot, right? You also might not bother investing in effective demand generation that feeds the vendor, or even list development, instead allowing the vendor to get by on cold-calling decaying lists.

Your program then becomes the dumping ground for new hires. The vendor might also park underperformers there before giving them their walking papers. In other words, both you and the vendor try to extract some value out of the effort. But, some of what matters isn’t getting measured, like the cost in the market place to your brand because of the quality of the calling.

LESSON LEARNED: A business relationship is a two-way street. Your vendor can’t help you be successful, if you don’t help it be successful. As Jerry Maguire said, “Help me help you!”

Is there transparency in your relationship?

So, what’s the right approach? It really depends on what you need and how clear you are about your needs. If you have a reasonably well-oiled, well-documented process and approach to teleprospecting, then asking the vendor to share in the risk and the upside can serve your mutual long-term interests.

If things are not going so well and you need to figure out the right approach, then pay-for-performance is going to create unnecessary conflict. You might be better served in that case to put your focus on determining the right model or strategy for teleprospecting and the parameters of a pilot. Insist on a level of transparency during the pilot and then use the pilot to optimize the approach. Then, after the production level has begun to plateau, start working on a shared risk model.

The right shared risk fee structures ensure that both the vendor and the client win if the program is working and lose if the program is failing. To arrive at such an arrangement, there must be clarity on both sides about mutual obligations and the consequences for non-compliance. Mutual trust and respect are also necessary, including a win-win approach to the fee structure.

To those who might argue that every dollar of profit a vendor makes is a dollar of margin that is lost to its clients, I would point to the free enterprise system. Everywhere in free markets, the quest for profits drives higher levels of efficiency (and losing money drives companies out of markets and out of business). If the vendor makes above average profits for driving above average efficiency, then its clients are the beneficiaries. And the profits that the vendor makes must always be tempered by what its competitors offer or what its clients believe they can achieve in-house.

LESSON LEARNED: A rising tide lifts all boats…as long as everyone is clear on how “tide” and “boat” are defined in the process. So, before you dive in, dip your toe in and start with a pilot that has flexibility to evolve over time. Once the proper success metrics have been discovered, and a working relationship is established, you can create a more successful payment model that truly shares risk and reward.

But don’t stop there. Look at this as an evolving fee model. Continue to optimize as you learn more about what creates a mutually successful relationship.

Related Resources

B2B Marketing: The 7 most important stages in the teleprospecting funnel

B2B Lead Generation: Why teleprospecting is a bridge between sales and marketing

B2B Marketing: The FUEL methodology outlined

Free MarketingSherpa B2B Newsletter

Marketing Research: How asking your customers can mislead you

February 25th, 2011

In a recent blog post for our sister company MarketingExperiments, I shared my experiences at the fifth Design for Conversion Conference (DfC) in New York City. Today, I want to focus on a topic from Dr. Dan Goldstein’s presentation, and its relevance to usability and product testing for marketers — how focus group studies can effectively misrepresent true consumer preferences.

Asking you for your input on our Landing Page Optimization survey for the 2011 Benchmark Report has firmly planted the topic of surveys at the forefront of my thinking.

Calibration is not the whole story

The need to calibrate focus group data is well recognized by marketers and social scientists alike. The things marketers most want to know – such as “intent to purchase” – are especially susceptible to misleading results. It’s easy to imagine that when people are asked what they would do with their money in a hypothetical situation (especially when the product itself is not yet available), naturally their answers are not always going to represent actual behavior when they do face the opportunity to buy.

However, mere calibration (which is a difficult task, requiring past studies on similar customer segments, where you can compare survey responses to real behavior) is not enough to consider. How we ask the question can influence not only the answer, but also the very behavior the respondent is being surveyed about.

Dr. Goldstein pointed me to an article in Psychology Today by Art Markman, about research into how “asking kids whether they plan to use drugs in the near future might make them more likely to use drugs in the near future.” Markman recommends that parents must pay attention to when such surveys are taken, and make sure that they talk to their children both before and after to ensure that the “question-behavior effect” does not make them more likely to engage in the behaviors highlighted in the surveys. The assumption is that if the respondent is aware of the question-behavior effect, the effect is less likely to work.

Question-Behavior Effect: The bad

If your marketing survey is focused on features that your product or service does not have—whether your competitors do or do not—then asking these negative questions may predispose your respondents against your product, without them even being aware of the suggestion. This is especially worrisome when you survey existing or past customers, or your prospects, about product improvements. Since you will be pointing out to them things that are wrong or missing, you run a good chance of decreasing their lifetime value (or lead quality, as the case may be).

Perhaps the survey taker should spend a little extra time explaining the question-behavior effect to the respondent before the interaction ends, also making sure that they discuss the product’s advantages and successes at the end of the survey. In short, end on a positive.

Question-Behavior Effect: The good

However, there is also a unique opportunity offered by the question-behavior effect: by asking the right questions, you can also elicit the behavior you want. This means being able to turn any touch point—especially an interactive one like a customer service call—into an influence opportunity.

I use the word “influence” intentionally. Dr. Goldstein pointed me to examples on commitment and consistency from Robert Cialdini’s book Influence: Science and Practice, such as a 1968 study conducted on people at the racetrack who became more confident about their horses’ chance of winning after placing their bets. Never mind how these researchers measured confidence—there are plenty of examples in the world of sales that support the same behavioral pattern.

“Once we make a choice or take a stand, we will [tend to] behave consistently with that commitment,” Cialdini writes. We want to feel justified in our decision. Back in college, when I studied International Relations, we called it “you stand where you sit”—the notion that an individual will adopt the politics and opinions of the office to which they are appointed.

So how does this apply to marketing? You need to examine all touch points between your company and your customers (or your audience), and make a deliberate effort to inject influence into these interactions. This doesn’t mean you should manipulate your customers—but it does mean that you shouldn’t miss an opportunity to remind them why you are the right choice. And if you’re taking a survey—remember that your questions can reshape the respondents’ behaviors.

P.S. From personal experience, do you think being asked a question has influenced your subsequent behavior? Please leave a comment below to share!

Related Resources

MarketingSherpa Landing Page Optimization Survey

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Marketing Research: Cold, hard cash versus focus groups

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

MarketingSherpa Members Library — Are Surveys Misleading? 7 Questions for Better Market Research

Marketing Research: Cold, hard cash versus focus groups

December 9th, 2010

“The best research is when individuals pull out their wallet and vote with cold, hard cash.” – my first boss

My first experience in marketing was working with a specialized publishing company. I had the privilege to work on exciting products with sexy topics such as “human resource compliance regulations.” Trust me when I tell you there is no better ice-breaker at a party than talking about a ground-breaking court ruling that will change how your company meets compliance of the Fair Labor Standards Act (FLSA).

As a publisher, we used direct-response marketing to drive sales, with an aggressive program of direct-mail, email and telemarketing. And when it came to new product development, we were big believers in research. From customer surveys to industry research to focus groups, we used it all to make the best possible decision. At least, that was the general assumption…

Out of focus

You always have to test because many research tactics just help you achieve a best guess. And while a best guess is often closer to the truth than a random guess, it’s sometimes wildly off the mark. In fact, I learned a valuable lesson one day when our company performed a focus group.

The members of this particular focus group were subscribers of a paid newsletter, and we knew that each person had subscribed by responding to a specific direct mail piece. That mail piece was extremely effective, with a powerful but somewhat provocative subject line and letter. Many people loved that direct-mail piece, but many hated it, so we wanted to get the opinion of the focus group members. When we showed the group the direct-mail piece and asked them if they would respond to that piece, 40 percent said they would never respond (if they only knew what we knew). Wow, we were shocked!

So, should we conclude that those 40% were bald-faced liars? Not necessarily. What we can conclude is that what people say they will do and what they actually do may be totally different. That is why research is only part of the equation, but if you want to sleep well at night, you have to take the next step…

Voting with their wallets

At the end of the day, the best research was when we tested the product and let the customers in the marketplace determine with their wallet if it was a viable product. We would test critical elements, like book title and price, and very quickly we would know if we had a winner or not.

Yes, all of the surveys and research were necessary to get started, but the most critical research was in our testing program. Testing is an amazing research tool. Regardless of the conversion you are trying to achieve, when your prospect takes (or doesn’t take) an action, you have a valuable piece of information. Your conversion goal may be an event ticket sale, a white paper download, an email newsletter signup, or hundreds of other possible actions, but one thing never changes – the action you are seeking to drive can be tracked.

And if you’re ready to measure when your prospect engages with you, that is when the learning begins.

So, I’m thankful for that boss early in my career telling me repeatedly that the best research is when individuals pull out their wallet and vote with cold, hard cash. Over the years, I’ve had many experiences when individuals tell me they are going to do something but until they actually do it, I’m a little skeptical. (Editor’s Note: It’s true. Todd told me he was going to write a blog post for quite a while. Now, I believe it.)

So gather as much research as possible, but always remember that cold, hard cash is a pretty sweet piece of research.

Related resources

Are Surveys Misleading? 7 Questions for Better Market Research (Members Library)

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Never Pull Sofa Duty Again: Stop guessing what your audience wants and start asking

Ten Numbers Every Email Marketer Should Commit to Memory

November 2nd, 2010

Earlier this month I led my ninth full-day Email Marketing Essentials Workshop Training for MarketingSherpa. One of the most popular parts of the day is the on-the-spot critiques of email creative. They are gentle critiques – attendees send samples and as a group we talk about how they follow standards and best practices and what the marketer might test to improve performance. I’ve had people tell me that the critiques of their pieces alone were worth the cost of admission, but I digress.

Creative is nice, but quantitative data is where the rubber meets the road for me. So I often ask attendees about the performance of the creative we’re viewing. What’s your abandon rate on that email sign-up page? How was the click-through rate on this email? Why do you send this every month – does the ROI justify it?

And it surprises me how many marketers don’t know these numbers off the top of their heads.

Now, I admit it; I’ve always had a head for figures. When I check in to a hotel my room number always imprints on my mind right away. Early in my career I was a fixture at upper level meetings, because my bosses knew that I’d be able to answer, off the top of my head, just about any quantitative query that arose.

But still…

In my perfect world, here are the ten numbers that every email marketer would have committed to memory (join us in Miami or Los Angeles and impress me!):

1. Percentage of New Website Visitors that Sign-up for Email

New, rather than returning, Website visitors are a key audience for email acquisition. Anyone who visits your Website and likes what they see is someone you want on your email list. That way you have the ability to begin a relationship with them via email – and sell them, over time, on your company and your offerings.

There’s no “magic number” for this type of conversion. But there’s also no reason that you shouldn’t always be trying to improve it. I’ve seen new visitor conversion rates from 0% to 25%. The key is to know what your conversion rate is and to always be looking for ways to improve it.

2. Abandon Rate on Your Online Sign-up Process

How many people that start your sign-up process bail out before they are finished? This is important because if you can figure out why they left and address the issue, you have an instant boost in your list growth.

There’s no golden rule for what your abandon rate should be, except the lower the better. I’ve seen abandon rates as high as 90%, meaning that 9 out of 10 people that thought they wanted an email relationship changed their mind during sign-up. If you’ve got 50% or more of the folks abandoning your sign-up process, you have plenty of room for improvement. And even if it’s only 25%, making changes to get it to 20% would provide a nice lift in list growth.
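The arithmetic here is simple, but worth being precise about. A quick sketch, using hypothetical sign-up funnel numbers:

```python
# Hypothetical sign-up funnel numbers (illustrative only)
starts = 1_000        # people who began the sign-up process
completions = 750     # people who finished it

abandon_rate = (starts - completions) / starts
print(f"Abandon rate: {abandon_rate:.0%}")   # 25%
```

In this example, trimming the abandon rate from 25% to 20% would mean 50 more subscribers per 1,000 starts without spending another dime on traffic.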

3. List Size

Everyone should know how many email addresses they have on their email list. This is a basic and most people do have this information close at hand, I’ll admit. But be sure you get monthly updates.

As with some of the other figures, there’s no perfect size for a list. I’d rather work with a client that has a small but responsive list than a large list that’s unresponsive. It’s usually more difficult to do any statistically significant testing with a list of less than 40,000 email addresses. The key here is to know the universe of your target audience and work to get as many of them as possible to sign-up for your email program.

4. Monthly List Growth Rate

How much is your list growing, or declining, each month? It’s an important question to be able to answer since it goes directly to the long-term viability of your email marketing program.

MarketingSherpa’s 2010 Email Marketing Benchmark Report reported that lists that were growing did so at an average rate of 19.2% every six months. Those with declining lists reported a 10.3% decline over the same six-month period.

If you do the math, that’s an average monthly list growth of 3.2% for those with lists that are increasing. If your list is growing more quickly than this, great – but you still want to be thinking about how you can boost acquisition performance.
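The 3.2% figure above comes from simple division of the six-month rate. If you prefer to treat growth as compounding month over month, the equivalent monthly rate is slightly lower. A sketch of both calculations:

```python
# Deriving a monthly growth rate from the Benchmark Report's
# six-month figure of 19.2% (illustrative arithmetic only)
six_month_growth = 0.192

# Simple division, as used above: 19.2% / 6 months
simple_monthly = six_month_growth / 6
print(f"Simple monthly rate: {simple_monthly:.1%}")        # 3.2%

# If growth compounds month over month, the equivalent rate is a bit lower
compounded_monthly = (1 + six_month_growth) ** (1 / 6) - 1
print(f"Compounded monthly rate: {compounded_monthly:.1%}")  # 3.0%
```

Either convention is fine for benchmarking your own list, as long as you apply it consistently month over month.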

5. Bounce Rate

Bounces shouldn’t be a large part of your email send – but you should know this figure cold. If it’s higher than the industry average, then you may have serious issues with your list, list management and/or with being blacklisted (since some, but not all, blacklists return a bounce message).

The Epsilon Q2 2010 Email Trends and Benchmarks Study reported an average bounce rate of 5.2%. If yours is higher, it’s worth some analysis to figure out why.

6. Open Rate

What percentage of your email audience is opening your missives? Opening is the first step to action, so this figure directly impacts your overall campaign performance. Open rates are somewhat controversial, but if you look at them as a relative, rather than an absolute, metric, they are still very useful.

Open rates are running from 16.4% to 31.5%, depending on industry, according to the Epsilon Q2 2010 Email Trends and Benchmarks Study. The average across all industries is 22.1%. If your open rates are below this it gives you something to shoot for. If you’re exceeding these metrics, there’s still always room to do better.

7. Click-through Rate

Click-through rate, calculated by dividing unique clicks by the quantity assumed delivered (send quantity minus bounces), is an important figure. Don’t get it confused with your click-to-open rate (sometimes referred to as the “engagement rate”) – which is useful but very different.

The Epsilon Q2 2010 Email Trends and Benchmarks Study reported an average click-through rate of 5.3%; the range across industries is 3.2% to 10.1%. As with opens, you should always know where you are in relation to industry benchmarks – and always be striving to improve your performance.
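The distinction between click-through rate and click-to-open rate trips up a lot of marketers, so here is a sketch of both formulas using made-up campaign numbers:

```python
# Hypothetical campaign numbers (illustrative only)
sent = 100_000
bounces = 5_000
unique_opens = 22_000
unique_clicks = 4_800

delivered = sent - bounces   # "assumed delivered"

# Click-through rate: unique clicks / assumed delivered
ctr = unique_clicks / delivered
print(f"Click-through rate: {ctr:.1%}")   # 5.1%

# Click-to-open ("engagement") rate: unique clicks / unique opens
cto = unique_clicks / unique_opens
print(f"Click-to-open rate: {cto:.1%}")   # 21.8%
```

The same clicks produce very different percentages depending on the denominator, which is why quoting a "click rate" without naming the formula invites confusion.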

8. Conversion Rate

There are many different types of conversion rates, but the one I’m speaking of here is related to the bottom line goal of your email. If you’re looking to generate leads, then it’s the number of leads you generated divided by the quantity of email messages assumed delivered (quantity sent minus bounces). If you’re going for direct sales, then substitute direct sales for leads generated. If your email has some other goal, like driving traffic to a brick and mortar location, then substitute that.

Conversion rates are all over the board. Much depends on what your goal is and how much effort and/or commitment is required from the respondent to achieve it. I’ve seen conversion rates as high as 40% (and this conversion required a purchase!) – and as low as 0%. As with many of the other metrics, there’s no hard and fast measure of success here – but you always want to do better.
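For a lead generation goal, the calculation described above might look like this, with hypothetical numbers:

```python
# Hypothetical lead-generation campaign (illustrative only)
sent = 100_000
bounces = 5_000
leads = 1_900

delivered = sent - bounces   # quantity sent minus bounces
conversion_rate = leads / delivered
print(f"Conversion rate: {conversion_rate:.1%}")   # 2.0%
```

Swap `leads` for direct sales, store visits or whatever your bottom-line goal is; the denominator stays the same.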

9. Return on Investment

Return on investment is calculated as how much revenue you generate for each dollar you spend on your email program. Some email marketers I’ve met shy away from even trying to calculate ROI because either (a) they have no way to track revenue associated with email or (b) they don’t feel they have a clear way to delineate the costs of their email program. While it’s great to have an absolutely accurate ROI, even an estimate can be useful. As long as you use the same formula to calculate ROI send-over-send and month-over-month, you can get a relative read on whether it’s improving or not.

The Direct Marketing Association projected that email will return $42.08 for each dollar spent on it in 2010. That figure is somewhat controversial (many feel it’s too high), but remember that it’s not a measure of success or failure. If your email marketing is profitable, returning more than $1 for each dollar you spend, it’s a success. And if you are making more than $42 for each dollar you spend, then your goal should be $50.
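Even a rough estimate is useful, as noted above, provided the formula stays consistent. A minimal sketch with hypothetical figures:

```python
# Hypothetical program figures (illustrative only)
revenue = 84_000.00   # revenue attributed to the email program
cost = 2_000.00       # ESP fees, creative, list costs, staff time, etc.

roi_per_dollar = revenue / cost
print(f"${roi_per_dollar:.2f} returned per dollar spent")   # $42.00
```

What you count as "cost" matters less than counting the same things every month, so the trend line is meaningful.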

10. Dollar Value of an Email Address

What is an email address worth to your company? This is another figure which few marketers know, but it’s critical to the success of your acquisition efforts. The easiest way to calculate it is to divide the overall revenue generated from your in-house email efforts by the average number of people on your house list. When you know how much revenue you can expect from an email address, then you know how much you can spend on acquisition.

There’s no right or wrong answer here, and no industry benchmarks which would be valuable to gauge against. But that shouldn’t stop you from generating a quantitative figure. And as with the other metrics, you should strive to be sure that the value of each email address goes up, not down, with time.
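The calculation described above can be sketched in a few lines, with hypothetical revenue and list figures:

```python
# Hypothetical figures (illustrative only)
annual_email_revenue = 500_000.00   # revenue from in-house email efforts
avg_list_size = 40_000              # average addresses on the house list

value_per_address = annual_email_revenue / avg_list_size
print(f"Each address is worth about ${value_per_address:.2f} per year")   # $12.50
```

In this example, any acquisition tactic costing less than $12.50 per new address pays for itself within a year, which is exactly the kind of decision this metric supports.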

Editor’s Note: Jeanne Jennings is teaching MarketingSherpa’s Email Essentials Workshop Training in 12 locations across North America this year; the next one takes place in Miami on November 9th. She’ll be blogging about the course material and her experiences during the tour. We’re excited to have her on board and contributing to the blog.

Related resources

MarketingSherpa Email Awards 2011

MarketingSherpa Email Summit 2011

MarketingSherpa Email Marketing Essentials Workshop Training