
Archive for the ‘Research And Measurement’ Category

New Technology Tracks the Eyepath of Website Visitors

June 14th, 2011

A recently published research paper may prove to be of great interest to marketers. “No Clicks, No Problem: Using Cursor Movements to Understand and Improve Search,” by Jeff Huang of the Information School, University of Washington, and Ryen W. White and Susan Dumais of Microsoft Research, examines the correlation between eye gaze on a webpage and cursor placement.

The research found a high correlation between where the cursor was placed on a page and where the user was actually looking. The team also created a tiny JavaScript snippet, capable of running invisibly on a webpage, that tracks the cursor in real time, revealing where a visitor is looking and, possibly more importantly, where they pause throughout the visit.

This is from the abstract of the linked paper:

In this paper, we examine mouse cursor behavior on search engine results pages (SERPs), including not only clicks but also cursor movements and hovers over different page regions.

We: (i) report an eye-tracking study showing that cursor position is closely related to eye gaze, especially on SERPs; (ii) present a scalable approach to capture cursor movements, and an analysis of search result examination behavior evident in these large-scale cursor data; and (iii) describe two applications (estimating search result relevance and distinguishing good from bad abandonment) that demonstrate the value of capturing cursor data.

Maybe most intriguing for marketers is the final line of the abstract, “Our scalable cursor tracking method may also be useful in non-search settings.”

The JavaScript code that drives this online tracking tech is a mere 750 bytes and had a negligible effect on the load time of webpages hosting the script. Although this technology is not yet commercially available, it should eventually present another interesting avenue for marketers to test website visitors’ behavior.
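The paper’s script isn’t reproduced here, but the core idea (sampling mousemove events sparsely enough to keep the payload tiny) can be sketched as follows. All names and thresholds are illustrative assumptions, not the researchers’ actual code.

```javascript
// Minimal sketch of a cursor-movement recorder. A sample is kept only when
// the cursor has moved far enough or enough time has passed, which keeps
// the recorded payload small, in the spirit of the paper's ~750-byte script.
function createCursorRecorder(minDistancePx, minIntervalMs) {
  const samples = [];
  let last = null;
  return {
    // Call on every mousemove with page coordinates and a timestamp.
    record(x, y, t) {
      if (last) {
        const moved = Math.hypot(x - last.x, y - last.y);
        if (moved < minDistancePx && t - last.t < minIntervalMs) return;
      }
      last = { x, y, t };
      samples.push(last);
    },
    // Compact "x,y,t" serialization suitable for a periodic beacon.
    serialize() {
      return samples.map(s => `${s.x},${s.y},${s.t}`).join(";");
    },
  };
}

// In a browser, the wiring would look roughly like this (shown as comments
// since it only runs inside a page):
// const rec = createCursorRecorder(8, 100);
// document.addEventListener("mousemove", e => rec.record(e.pageX, e.pageY, Date.now()));
// window.addEventListener("pagehide", () => navigator.sendBeacon("/cursor", rec.serialize()));
```

The distance/time thresholds are the knob that trades replay fidelity against payload size.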


Jeff Huang, the team member who implemented and deployed the cursor tracking code, mined the cursor data, and wrote parts of the paper, took a few moments to answer several questions I had about this intriguing technology.

Tell me a little more about this research.

Jeff Huang: We examined mouse cursor behavior on search engine results pages, including not only clicks but also cursor movements and hovers over different page regions. In an eye-tracking study, we showed that cursor position is closely related to eye gaze. We developed a scalable approach to capturing cursor movements, and analyzed search result examination behavior for over 300,000 queries from around 22,000 people. Finally, we were able to use cursor movements to estimate search result relevance and distinguish good from bad abandonment.

Marketers are probably familiar with click and heatmaps, and this seems closely related to that technology. Do you think this technology can be useful for more than search?

JH: Cursor movements can create heatmaps with similar appeal to click maps. While click maps show only the regions being clicked, movement heatmaps can also show regions that received attention, by proxy of the cursor position. Often clicks are not available for smaller sites, so movements can provide richer information.
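The movement heatmaps Huang describes can be approximated by binning cursor samples into grid cells and treating per-cell counts as “heat.” This is a hypothetical sketch, not code from the study; the cell size and sample data are made up.

```javascript
// Count cursor samples per grid cell; hotter cells received more cursor
// attention, which the study uses as a proxy for visual attention.
function heatmap(samples, cellPx, cols, rows) {
  const grid = Array.from({ length: rows }, () => new Array(cols).fill(0));
  for (const { x, y } of samples) {
    const c = Math.min(cols - 1, Math.floor(x / cellPx));
    const r = Math.min(rows - 1, Math.floor(y / cellPx));
    grid[r][c] += 1;
  }
  return grid;
}

// Three hypothetical samples on a 150x100px region, 50px cells:
const samples = [{ x: 10, y: 10 }, { x: 15, y: 12 }, { x: 130, y: 40 }];
console.log(heatmap(samples, 50, 3, 2)); // → [[2, 0, 1], [0, 0, 0]]
```

A renderer would then map each cell’s count to a color intensity overlaid on the page screenshot.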

Could marketers utilize this tech for webpage research to improve, say, the eyepath of the page? Is cursor movement correlated with a page visitor’s eyepath?

JH: Yes, as we mention in the paper, the cursor is typically within 200 pixels of the eye gaze. The most common position for the cursor is to be slightly below what the user is looking at. We also found that the cursor follows the eye gaze by around 200ms [milliseconds] (although the data for this is highly variable).

Does this technology offer other applications for marketing efforts or webpage testing?

JH: Sure, having records of the cursor movements allows marketers to replay the user’s session on the Web page. For example, they can see the order in which a user filled out a form, even if the user did not complete the form and left the page instead. We have developed an efficient method to record the cursor movements so they take a minimal amount of space and can be collected without disrupting the user.

Is this tech publicly available, and if not, when is commercial roll-out expected?

JH: This was developed as part of an internship at Microsoft Research last year, and deployed to internal users. I will return to Microsoft again this summer, but I cannot comment on its commercial availability.

Optimization Summit: Tests with poor results can improve your marketing

June 3rd, 2011

Day one of 2011 Optimization Summit has come and gone. Many of us have vertigo, either from the amount of content we absorbed, or the view from the rotating restaurant atop the Westin, the tallest hotel in the Western Hemisphere.

Yesterday’s sessions were rich with insights from experts and marketers presenting their experiences in optimization. I say “experiences” because, as we saw, not every test improves results. But every valid test offers valuable insights.

Dr. Flint McGlaughlin, CEO & Managing Director of MECLABS, addressed this point head-on in the day’s first session. McGlaughlin presented examples of landing page tests that brought greater than 50% declines in response.

Was Dr. McGlaughlin feeling woozy? Did he sit in the rotating restaurant too long before his session? Actually, no. Dr. McGlaughlin illustrated that even tests with poor results can reveal valuable insights about an audience.

“The goal of a test is to get a learning, not a lift. With enough learnings, you can get the real lift,” he said.

Landing page results: two tests

The above image features the tests McGlaughlin touched on. If you’ve seen such results, then you’ve probably asked yourself “well, what do we do now?” Part of the answer came from Boris Grinkot, Associate Director of Product Development, MarketingSherpa, in a later session. Grinkot mentioned two typical reasons landing page visitors do not convert:

1. The page does not offer what visitors want

2. The page does not clearly explain that you have what visitors want (or why they want it from you)

These two reasons can help you identify the cause of poor landing page performance, and what you should test to improve results.

With this in mind, the researchers tested a final treatment that featured drastically shorter copy. The idea was to get out of the way — to clearly show visitors that the site had what they wanted and to make it easy to get.

Landing page treatment 3

This treatment increased conversion rates by 78%. Why?

The marketing channel driving traffic to the page had already done the selling, Dr. McGlaughlin said. The page did not have to convince visitors to convert — they were ready to convert. The previous treatments were impeding them.

The results of the previous two tests helped the researchers form this hypothesis and create the third treatment. Even though the two tests had abysmal results, they gave the team enough insights to identify a better treatment that would generate a real lift in response. So even tests with poor performance can improve your marketing — they just might not have improved it yet.
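A note on the arithmetic: a “lift” like the 78% above is simply the relative change in conversion rate between treatment and control. A quick sketch with made-up numbers (not the actual test data):

```javascript
// Hypothetical figures for illustration only -- not from the actual test.
function conversionRate(conversions, visitors) {
  return conversions / visitors;
}

// Relative lift: how much the treatment rate improved over the control rate.
function relativeLift(controlRate, treatmentRate) {
  return (treatmentRate - controlRate) / controlRate;
}

const control = conversionRate(100, 10000);   // 1.00%
const treatment = conversionRate(178, 10000); // 1.78%
console.log(`Lift: ${(relativeLift(control, treatment) * 100).toFixed(0)}%`); // prints "Lift: 78%"
```

Before acting on such a number, a test would also need enough traffic for the difference to be statistically valid.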

Related resources

Optimization Summit 2011

Landing Page Optimization: 2 charts describing the best page elements to test and how to test them

Marketing Research Chart: Top website objectives to determine optimization priorities and tactics

Landing Page Optimization: Minimizing bounce rate with clarity

Optimization and A/B Testing: Why words matter (for more than just SEO)

Members Library – Online Marketing: Website redesign leads to 476% increase in page views and 64% lower bounce rate

Members Library – Campaign Analysis: Optimization expert lists 5 tweaks to boost an email campaign’s conversions

Landing Page Optimization: 2 charts describing the best page elements to test and how to test them

May 31st, 2011

Optimization testing can be daunting. With so many elements on a Web page, and so many ways each could be customized, knowing what to test and how to change it can feel like testing spaghetti the old college way (throw it at the ceiling and see if it sticks).

But optimization does not have to be daunting or random. Some marketers will receive a crash course in landing page optimization at our Optimization Summit this week. If you can’t make it, don’t fret. There’s always next year. In the meantime, MarketingSherpa just published the 2011 Landing Page Optimization Benchmark Report.

I pulled two charts from the report to give marketers some reference points when designing their tests. Hopefully they will help keep crusty pasta off your ceiling.

Landing Page Optimization Chart: Top page elements to test

This chart lists the four page elements that rank most consistently as having a “very significant impact” across three optimization objectives. Note that a different page element ranks highest for each objective:

  • Direct lead gen: The highest performing element is the form layout at 44 percent
  • Incentivized lead: The highest performing element is the body copy at 41 percent
  • Ecommerce: The highest performing element is the image content at 43 percent

The chart lists only four of 17 page elements measured by our analysts, so there are many other elements that can be impactful in your tests. Your results may not mimic this data exactly, but this chart points to elements that other marketers are seeing as having the most impact.

Landing Page Optimization Chart: Top Segmentation and Relevance Tactics

Once you select a page element to test, the big question becomes “how do we change it?” This chart lists tactics you can use to segment your audience and add more relevance to your optimization pages. Each tactic is ranked by its effectiveness, ease of use, and usage rate among marketers.

The far right of the chart features the most effective tactics: segmenting based on purchase history and other CRM data. Customizing landing pages to a customer’s purchase history appears to be an opportunity for marketers. It is the most-effective tactic listed and appears relatively easy to implement.

In the report, our analysts also point to another opportunity: messaging in the referring ad or page.

“Using the messaging in the referring ad or page can be especially easy to apply when the marketer also controls that messaging, making it a highly efficient way to segment,” according to the report.

However you go about your optimization tests, it is important that you test accurately and continuously learn from the results. The data in these charts can provide reference points to guide your plans, but only your team can uncover the best tactics to fit your audience and your brand.

Related resources

Optimization Summit 2011

2011 Landing Page Optimization Benchmark Report

Marketing Research Chart: Top website objectives to determine optimization priorities and tactics

Landing Page Optimization: Minimizing bounce rate with clarity

Optimization and A/B Testing: Why words matter (for more than just SEO)

Members Library — Campaign Analysis: Optimization expert lists 5 tweaks to boost an email campaign’s conversions

Members Library — Landing Page Optimization: How to serve 2 markets with 1 page

Members Library — How to Plan Landing Page Tests: 6 Steps to Guide Your Process

Digital Marketing: How to measure ROI from your agencies

May 17th, 2011

Today’s marketing world is incredibly complex. The growth of digital has dramatically expanded the number of channels and customer touch points that require marketing attention, and it isn’t just a question of numbers. Digital channels often involve unique skills, unique technology and unique culture. Combining SEO expertise with great digital creative plus Facebook smarts and traditional media buying isn’t just difficult, it’s pretty much impossible.

Inevitably, you’re faced with a world where you need to rely upon, direct, manage and motivate multiple agency partners. To do that – and to understand how to allocate resources between channels, how to decide if an agency is giving you all they can, and how to choose where to invest your time and resources – takes sophisticated measurement. You can’t manage what you don’t measure – this statement is as true for your agency relationships as it is for your marketing dollars.

In a world where there are lies, damn lies, and statistics, why would you let your agencies measure their own performance? If your agencies are siloed, they have every incentive (and ability) to make their channel look maximally successful. If you’ve concentrated everything in a single agency, that agency has every incentive (and ability) to make their entire program look successful and not delve too deeply into any single piece.

In today’s environment, measurement is just too important to leave to the wolves.

Intra-Agency Measurement suffers from four BIG problems:

  • Skill Set: For most agencies, measurement is just grafted onto a creative culture. It isn’t their business, core expertise or focus and isn’t what makes them money.
  • Bias: It doesn’t take evil intent to create bias. One of the great challenges of measurement is the temptation to always pass on good news. When the analyst has a self-interested stake in the measurement, this problem is that much worse.
  • Siloed View of the World: Even the best measurement an agency can provide is typically limited to their world and their tools. They see only their slice of the pie – meaning that cannibalization, cross-channel, and customer issues are invisible to them.
  • Standardization: Every industry has evolved its own way of talking about measurement, and they are all different. Nobody agrees on what engagement means or how ROI metrics should be applied. Vendors have reports and technology that are narrowly adapted to their own language and techniques and cannot be standardized.

What’s the right solution?

You need a “Digital Watchdog” – either an analytics agency of record or an internal employee or department tasked with making sure that every channel you use has the right measurement, the right standards, and the right level of resources and attention.

A Digital Watchdog should be focused explicitly on measurement, measurement tools and measurement skills. That guarantees you a culture based on measurement and an appropriate skill set to solve your measurement challenges. A Digital Watchdog should have NO vested interest in your spend. They should not manage ANY media budget or have any stake in which channels you invest in or use.

That’s what you should expect of a Digital Watchdog. Here’s what they should expect of you.

A Digital Watchdog needs to be given a cross-channel view of your customers and measurement. They need to see and have access to all your marketing spend and agency reporting. A Digital Watchdog needs to be able to create or collaborate on the creation of a comprehensive view of measurement standardization. As long as you allow each channel to measure itself its own way, you can’t expect ANYONE to make sense of the whole picture.

There are some key steps when it comes to getting started with a Digital Watchdog. Usually, you’ll start with a review of the measurement in place for each channel – is it complete, accurate, and robust? Having the basic measurement infrastructure in place (and knowing it’s right) is essential.

The second step is typically the creation of a standardized measurement framework (based on segmentation) that can be applied to every channel. Useful measurement begins with audience segmentation and drives across your business naturally – not by forcing your business into artificial measurement constructs.

Once you’ve got a good framework in place, it’s time to execute both Media-Mix and Attribution Modeling to understand spending interactions and optimization. Media-Mix Modeling is your best tool for deciding how moving the levers of marketing spend by channel drives total business results. Attribution Modeling helps you understand how channels work in harmony (or at cross-purposes) when it comes to acquisition, engagement and conversion.
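To make the attribution idea concrete, here is a minimal sketch of two common models. The channels and revenue figure are hypothetical, and real attribution modeling is considerably more involved than this:

```javascript
// One hypothetical conversion path (touches in order) worth $100.
const path = ["display", "organic search", "email"];
const revenue = 100;

// Last-touch: all credit goes to the final channel before conversion.
function lastTouch(touches, value) {
  return { [touches[touches.length - 1]]: value };
}

// Linear: credit is split evenly across every touch in the path.
function linear(touches, value) {
  const share = value / touches.length;
  const credit = {};
  for (const t of touches) credit[t] = (credit[t] || 0) + share;
  return credit;
}

console.log(lastTouch(path, revenue)); // { email: 100 }
console.log(linear(path, revenue));    // each of the three channels gets ~33.33
```

Comparing the two outputs shows why the model choice matters: last-touch makes email look like the only channel that works, while linear surfaces the assist role of display and search.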

At the same time, you’ll want to identify the holes and gaps where your agency measurement isn’t adequate, where their performance is sub-optimal, or where you’re not getting the attention you deserve. Your Digital Watchdog should drive channel-specific optimizations for “problem” agencies and help you evaluate how to get more from your relationships.

With the dollars involved in today’s marketing world, there’s just too much at stake to count on your agencies doing the right thing with measurement. They are in the wrong place, with the wrong tools, the wrong motives and the wrong skill sets to do the job right.

Bob Heyman is a keynote speaker at Optimization Summit 2011, and all attendees will receive a copy of his book, “Marketing by the Numbers: How to Measure and Improve the ROI of Any Campaign,” provided by HubSpot.

Gary Angel, President and CTO, Semphonic, contributed heavily to this blog post as well.

Related Resources:

Optimization Summit 2011

Marketing Strategies: Is performance-based vendor pricing the best value?

New Chart: What Social Metrics are Organizations Monitoring and Measuring?

Maximize your Agency ROI

Photo attribution: Randy Robertson

Social Media Measurement: Moving forward with the data and tools at hand

April 29th, 2011

Social media measurement is in its early phases, and marketers need to decide whether to parse the social media cacophony much like a radio astronomer, gathering as much data as possible to discern signs of life, or to selectively focus on a small but sufficiently meaningful set of metrics.

The word “sufficient” can span a wide spectrum, and determining what is sufficient is perhaps the question that marketers must answer.

In some sense, you really don’t have a choice. How much data you can afford to collect and analyze is limited by your organization’s budgetary and human resources. If you are not already collecting enough data for “big” analytics (“Approach 1” that I described in my last blog post), it makes sense to get the most out of what you have now relatively quickly, and in the process learn what additional data you need.

I spend a significant amount of time in digital photography, and my friends often ask me for advice on what camera to buy as they are getting more “serious.” My answer is always the same—first, get the most out of the camera you have. Once you start appreciating what your camera lacks, then you can start thinking about investing into those specific features.

In the same sense, getting started is critical. Reading blog posts will not give you a concrete sense of social media (SoMe) measurement until you get your own hands on a monitoring tool—even if you start by manually listening to conversations using RSS feeds, Twitter, Google Alerts, and the like.

Second, you need to clearly identify your objectives. In our own research project on SoMe measurement with Radian6, I am leaning toward focusing on best practices for specific scenarios—e.g., a Facebook company page—to deal with manageable amounts of data and produce results on a realistic timeline.

So for those not quite ready for “big” analytics, let’s take a look at a quick start approach…

Approach 2: A microscope, not a radio telescope

Commit to a set of metrics you’ll be accountable for, and stick with them. This is a far more pragmatic approach that does not require every kind of data to be available for measurement. It may appear less scientific, but that is not the case. While focusing on a smaller number of metrics does not paint the whole picture the way the first approach does, trending data over time can be highly valuable and meaningful in reflecting the effectiveness of marketing efforts.

Taking into account the marginal time, effort, and talent required to process more data, it makes economic sense to focus on a smaller number of data points. With fewer numbers to crunch, marketers armed only with data available directly from their social media management tools, for example, can calibrate their marketing efforts against this data to build actionable KPIs (key performance indicators).

During Social Media Week, NYC-based Social2b’s Alex Romanovich, CMO, and Ytzik Aranov, COO, presented a straightforward measurement strategy rooted in established, if not venerated, marketing heuristics, such as Michael Porter’s Value Chain Analysis. Their core message is to appreciate that different social media KPIs will be important not only to different companies and industry segments, but “these KPIs also have to align well with more traditional metrics for that business – something that the C-Level and the financial community of this company will clearly understand.”

Alex stresses that “the entire ‘value chain’ of the enterprise can be affected by these metrics and KPIs – hence, if the organization has a sales culture and is highly client-centric, the entire organization may have to adapt the KPIs used by the sales organization, and translated back to the financial indicators and cause factors.”

This approach should immediately make sense to marketers, even without any knowledge of statistical analysis.

Social2B focuses not only on the marketing, but also on the customer service component of SoMe ROI, and here is Ytzik’s short list of steps for getting there:

  1. Define the social media campaign for customer service resolution
  2. Solve for the KPI and projections
  3. Apply Enterprise Scorecard parameters, categories
  4. Solve for risk, enterprise cost, growth, etc.
  5. Map to social media campaign cost
  6. Solve for reduction in enterprise costs through social media
  7. Justify and allocate budget to social media

An important element here is the Enterprise Scorecard—another established (though loosely defined) management tool that is often overlooked even by large-scale marketing organizations. Given the novelty of SoMe, getting it into the company budget requires not only proving the ROI numerically, but also speaking the right language. Ytzik’s “C-level Suite Roadmap” might appear simple, but it requires that corporate marketers study up on their notes from business school:

  • Engage in Compass Management (managing and influencing your organization vertically and horizontally in all directions)
  • Define who owns the Web and social media within the company
  • Identify the enterprise’s value chain components
  • Understand the enterprise’s financial scorecard

Again, no statistics here—it is understood that analysis will be required, but these tools will put you in a good position when the time comes to present your figures.

How to get started

Finally, I wanted to get as pragmatic as possible to help marketers get started and not get stuck in a data deluge. Here are Social2B’s top 10 questions to ask yourself before you scale your SoMe programs:

  1. Is my organization and my executive management team ready for social media marketing and branding?
  2. Does everyone treat social media as a strategic effort or as an offshoot of marketing or PR/communications?
  3. Where in the organization will social media reside?
  4. Will I be able to allocate sufficient budget to social media efforts in our company?
  5. How will social media discipline be aligned with HR, Technology, Customer Service, Sales, etc.?
  6. What tools and technologies will I need to implement social media campaigns?
  7. Will ‘social’ also include ‘mobile’?
  8. How will we integrate SoMe marketing campaigns with existing, more ‘traditional’ marketing efforts?
  9. How much organizational training will we need to implement in integrating ‘social’ within our enterprise?
  10. Are we going to use ‘social’ for advertising and PR/Communications? What about ‘disaster recovery’ and ‘reputation management’?

Related Resources

Social Media Measurement: Big data is within reach

2011 Social Marketing Benchmark Report – Save $100 with presale offer (ends tomorrow, April 30)

Always Integrate Social Marketing?

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Social Media Measurement: Big data is within reach

April 28th, 2011

Should marketers wait for a grand unified theory of social media ROI measurement, or confidently move forward with what they have available to them now?

This question has been at the forefront of my thinking, as we proceed with MarketingSherpa’s joint research project with Radian6 to discover a set of transferable principles, if not a uniform formula to measure social media (SoMe, pronounced “so me!”) marketing effectiveness.

As I have written previously, some of the popular measurement guidelines provide a degree of comfort that comes from having numbers (as opposed to just words and PowerPoint® slides), but fail to connect the marketing activity to bottom-line outcomes.

To help think through this, I spoke with several practitioners to get some feedback “from the trenches” during SoMe Week here in NYC. With their help, I broadly defined two approaches.

Approach 1: Brave the big data

Take large volumes of diverse data, from both digital and traditional media, and look for correlations using “real” big-data analysis. This analysis is performed on a case-by-case basis, and the overarching principles are the well-established general statistical methods, not necessarily specifically designed for marketers.

Pros

  • The methodologies are well established
  • There are already tools to help (Radian6, Alterian, Vocus, etc.)

Cons

  • Most marketers are not also statisticians, nor do they have the requisite tools (e.g., SAS is excellent software, but it comes with a premium price)
  • Comprehensive data must be available across all relevant channels; otherwise, the validity of any conclusions from the data rapidly evaporates (Radian6’s announcement that it is integrating third-party data streams like Klout, OpenAmplify and OpenCalais, in addition to its existing integration with customer relationship management (CRM), Web analytics, and other enterprise systems, certainly helps)
  • In the end, it’s still conversation and not conversion without attribution of transactional data

If the volume of data becomes overwhelming, analytical consulting companies can help. NYC-based Converseon does precisely that, and I asked Mark Kovscek, their SVP of enterprise analytics, about the biggest challenges to getting large projects like this completed efficiently. Mark provided several concrete considerations to help marketers think through this, based on Converseon’s objectives-based approach that creates meaningful marketing action, measures performance, and optimizes results:

  • Marketers must start with a clear articulation of measurable and action-oriented business objectives (at multiple levels, e.g., brand, initiative, campaign), which can be quantified using 3-5 KPIs (e.g., Awareness, Intent, Loyalty)
  • Large volumes of data need to be expressed in the form of simple attributes (e.g., metrics, scores, indices), which reflect important dimensions such as delivery and response and can be analyzed through many dimensions such as consumer segments, ad content and time
  • The key to delivering actionable insights out of large volumes of data is to connect and reconcile the data with the metrics, with the KPIs, and with the business

How much data is enough? The answer depends on the level of confidence required. Mark offered several concrete rules of thumb for the “best-case scenario” when dealing with large volumes of data:

  • Assessing the relationship of data over time (e.g., time series analysis) requires two years of data (three preferred) to accurately understand seasonality and trend

–   You can certainly use much less to understand basic correlations and relationships. Converseon has created value with 3-6 months of data in assessing basic relationships and making actionable (and valuable) decisions

  • Reporting the relationship at a point in time requires 100-300 records within the designated time period (e.g., for monthly listening reporting, Converseon looks for 300 records per month to report on mentions and sentiment)

–   This is reasonably easy when dealing with Facebook data and reporting on Likes or Impressions

–   However, when dealing with data in the open social graph to assess a brand, topic or consumer group, you can literally process and score millions of records (e.g., tweets, blogs, or comments) to identify the analytic sample to match your target customer profile

  • Assessing the relationship at a point in time (e.g., predictive models) requires 500-1000 records within the designated time period

Understanding the theoretical aspects of measurement and analysis, of course, is not enough. A culture of measurement-based decision making must exist in the organization, which means designing operations to support this culture. How long does it take to produce a meaningful insight? Several more ideas from Converseon:

  • 80% of the work is usually found in data preparation (compiling, aggregating, cleaning, and managing)
  • Reports that assess relationships at a single point in time can be developed in 2-3 weeks
  • Most predictive models can be developed in 4-6 weeks
  • Assessing in-market results and improving solution performance is a function of campaign timing

Finally, I wanted to know what marketers can do to make this more feasible and affordable. Mark recommends:

  • Clearly articulate business objectives and KPIs and only measure what matters
  • Prioritize data
  • Rationalize tools (eliminate redundancy, look for the 80% solution)
  • Get buy-in from stakeholders early and often

In my next blog post on this topic, I’ll discuss an approach to SoMe measurement that trades some of the precision and depth for realistic attainability—something for the many marketers that can’t afford the expense or the time (both to learn and to do) required to take on “big data.”

Related Resources

Social Media Marketing: Tactics ranked by effectiveness, difficulty and usage

Always Integrate Social Marketing?

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Social Marketing ROAD Map Handbook

Social Media Marketing: Tactics ranked by effectiveness, difficulty and usage

April 26th, 2011

I’ve been browsing the new MarketingSherpa 2011 Social Marketing Benchmark Report this week and soaking up the rich data. One of the first charts that struck me is a bubble chart on social marketing tactics.

Social Marketing Tactics Chart 2011

First, I want to say, I love these bubble charts. They provide a three-dimensional view of the data on a given topic. Our researchers do a great job of packing them full of information without making them confusing.

This chart graphs the effectiveness, difficulty and popularity of each social media marketing tactic. You’ll notice a clear positive correlation between a tactic’s level of difficulty and its level of effectiveness.

Hard work pays off

For those of you who have not brushed up on your statistics lately (as I just brushed up a moment ago) I will note that a positive correlation between two factors means that as one factor increases, the second factor increases. For example, there is a positive correlation between my consumption of ice cream and the temperature outside.
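For the statistically curious, the standard way to quantify this relationship is the Pearson correlation coefficient. A small sketch with made-up data (the temperature/ice-cream pairing above, not the chart’s actual figures):

```javascript
// Pearson correlation coefficient: +1 means the two series rise together,
// -1 means one falls as the other rises, 0 means no linear relationship.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Hypothetical data: outside temperature vs. ice cream servings.
const temps = [15, 20, 25, 30, 35];
const servings = [1, 2, 3, 4, 5];
console.log(pearson(temps, servings)); // → 1 (a perfectly linear pair)
```

Real survey data like the chart’s would land somewhere between 0 and 1 rather than at a perfect +1.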

Looking at this chart, it’s clear that the most effective social marketing tactics are also the most difficult, and vice-versa. Blogger relations — the most effective tactic reported — is also the only tactic to break into the 70%-range in terms of marketers reporting it as “very” or “somewhat” difficult.

You’ll also see that the three most-effective tactics — blogging, SEO for social sites, and blogger relations — are known to require significant time and effort before they show results.

Every tactic is somewhat effective

Take a look at the scale on this chart’s Y-axis (level of effectiveness). Those percentages correspond to the share of marketers who reported a tactic as “very” effective. They do not include the marketers who reported a tactic as “somewhat” effective.

Looking at the chart, you might guess that adding social sharing buttons to emails is a waste of time — but don’t be too quick to write this tactic off completely. Only 10% of social marketers reported it as “very effective,” but 55% rated it as “somewhat effective” (found deeper in the report). With a total of 65% of social marketers reporting at least some effectiveness, these buttons might be worth the small investment they require.

Also, since adding social sharing buttons bottoms out the Y-axis here, every other tactic listed has more than 65% of social marketers reporting at least some effectiveness. Here are some examples:

  • Social sharing buttons on websites: 69% say at least “somewhat” effective
  • Advertising on social sites: 73%
  • Microblogging: 75%

Related resources:

MarketingSherpa 2011 Social Marketing Benchmark Report

Free Webinar: Best Practices for Improving Search and Social Marketing Integration

Marketing Research Chart: Using social media as a list-growth tactic

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Marketing Research: How asking your customers can mislead you

February 25th, 2011
Share

In a recent blog post for our sister company MarketingExperiments, I shared my experiences at the fifth Design for Conversion Conference (DfC) in New York City. Today, I want to focus on a topic from Dr. Dan Goldstein’s presentation, and its relevance to usability and product testing for marketers — how focus group studies can effectively misrepresent true consumer preferences.

Asking you for your input on our Landing Page Optimization survey for the 2011 Benchmark Report has firmly planted the topic of surveys at the forefront of my thinking.

Calibration is not the whole story

The need to calibrate focus group data is well recognized by marketers and social scientists alike. The things marketers most want to know – such as “intent to purchase” – are especially susceptible to misleading results. It’s easy to imagine that when people are asked what they would do with their money in a hypothetical situation (especially when the product itself is not yet available), their answers will not always represent their actual behavior when they do face the opportunity to buy.

However, calibration alone (a difficult task in itself, requiring past studies on similar customer segments in which survey responses can be compared to real behavior) is not enough. How we ask a question can influence not only the answer, but also the respondent’s subsequent behavior regarding the very thing we asked about.

Dr. Goldstein pointed me to an article in Psychology Today by Art Markman, about research into how “asking kids whether they plan to use drugs in the near future might make them more likely to use drugs in the near future.” Markman recommends that parents pay attention to when such surveys are taken, and talk to their children both before and after to ensure that the “question-behavior effect” does not make them more likely to engage in the behaviors highlighted in the surveys. The assumption is that if the respondent is aware of the question-behavior effect, the effect is less likely to work.

Question-Behavior Effect: The bad

If your marketing survey is focused on features that your product or service does not have—whether your competitors do or do not—then asking these negative questions may predispose your respondents against your product, without them even being aware of the suggestion. This is especially worrisome when you survey existing or past customers, or your prospects, about product improvements. Since you will be pointing out to them things that are wrong or missing, you run a good chance of decreasing their lifetime value (or lead quality, as the case may be).

Perhaps the survey taker should spend a little extra time explaining the question-behavior effect to the respondent before the interaction ends, also making sure that they discuss the product’s advantages and successes at the end of the survey. In short, end on a positive.

Question-Behavior Effect: The good

However, there is also a unique opportunity offered by the question-behavior effect: by asking the right questions, you can also elicit the behavior you want. This means being able to turn any touch point—especially an interactive one like a customer service call—into an influence opportunity.

I use the word “influence” intentionally. Dr. Goldstein pointed me to examples on commitment and consistency from Robert Cialdini’s book Influence: Science and Practice, such as a 1968 study conducted on people at the racetrack who became more confident about their horses’ chance of winning after placing their bets. Never mind how these researchers measured confidence—there are plenty of examples in the world of sales that support the same behavioral pattern.

“Once we make a choice or take a stand, we will [tend to] behave consistently with that commitment,” Cialdini writes. We want to feel justified in our decision. Back in college, when I studied International Relations, we called it “you stand where you sit”—the notion that an individual will adopt the politics and opinions of the office to which they are appointed.

So how does this apply to marketing? You need to examine all touch points between your company and your customers (or your audience), and make a deliberate effort to inject influence into these interactions. This doesn’t mean you should manipulate your customers—but it does mean that you shouldn’t miss an opportunity to remind them why you are the right choice. And if you’re taking a survey—remember that your questions can reshape the respondents’ behaviors.

P.S. Speaking from personal experience, do you think being asked a question has ever influenced your subsequent behavior? Please leave a comment below to share!

Related Resources

MarketingSherpa Landing Page Optimization Survey

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Marketing Research: Cold, hard cash versus focus groups

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

MarketingSherpa Members Library — Are Surveys Misleading? 7 Questions for Better Market Research

Email Marketing: Show me the ROI

February 3rd, 2011
Share

After squinting at my screen for weeks trying to read the MarketingSherpa 2011 Email Marketing Benchmark Report PDF, I finally have a hard copy sitting on my desk — and it’s bursting with insight.

Having read the executive summary weeks earlier, I flipped through the chapters today and was struck by this stat:

Does your organization have a method for quantifying ROI from email marketing?

  • No: 59%
  • Yes: 41%

Email marketing can be amazingly efficient. B2C marketers report an average 256% ROI from the channel — netting $2.56 for every $1 invested — as noted later in the report.

What shocks me is that 59% of email marketers have not gauged their program’s efficiency. This means their company executives are likely unaware of the amazing job they’re doing. Even if executives have seen the clickthrough and conversion rates, they’re likely thinking about that line from Jerry Maguire.

Show me the money

At last week’s Email Marketing Summit, Jeanne Jennings, Independent Consultant and MarketingSherpa Trainer, shot holes in many of the excuses she’s heard for why companies can’t calculate email’s ROI.

Here are three she highlighted:

  1. Our Web analytics software doesn’t provide this information
  2. We can’t track online sales back to email
  3. We don’t have an exact figure for costs

Taking these one at a time, Jennings noted that: 1) most analytics solutions can provide this information (Google Analytics does, and it’s free); 2) setting up the tracking is simple; and 3) you don’t need exact figures.

“As long as you can compare in an apples-to-apples fashion, that’s enough to get started,” Jennings said.

Judging performance by clickthrough and conversion rates is not enough — you should know the revenue generated, both on a campaign-level and a broader program-level.

Two simple calculations Jennings suggested:

  • Return on investment: Net revenue / cost
  • Revenue per email sent: Net revenue / # of emails sent
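Both calculations are one line of arithmetic. Here’s a minimal sketch using hypothetical campaign figures (none of these numbers come from the report; the $12,800 net revenue is chosen only to echo the 256% average ROI mentioned above):

```python
def roi(net_revenue, cost):
    """Return on investment as a ratio; multiply by 100 for a percentage."""
    return net_revenue / cost

def revenue_per_email(net_revenue, emails_sent):
    """Net revenue attributed to the campaign, divided by emails sent."""
    return net_revenue / emails_sent

net = 12800.0   # hypothetical net revenue attributed to the campaign
cost = 5000.0   # hypothetical total campaign cost (estimates are fine to start)
sent = 100000   # hypothetical number of emails sent

print(f"ROI: {roi(net, cost):.0%}")                               # ROI: 256%
print(f"Revenue per email: ${revenue_per_email(net, sent):.3f}")  # Revenue per email: $0.128
```

As Jennings noted, the cost figure doesn’t need to be exact — it just needs to be calculated the same way across campaigns so the comparisons stay apples-to-apples.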

On a campaign-level, these metrics will reveal which campaigns pull in more money — not just more clicks. For your overall program, they quickly convey the importance of your work.

Also: The movers and shakers in your company are going to be much more impressed with figures that include dollar signs.

Show email’s potential

Another way to convince executives of email’s power is to point to success at other companies. Also at the Email Summit last week, Jeff Rohrs, VP, Marketing, ExactTarget, mentioned Groupon as a great example that email marketers could rally around.

Forbes recently dubbed the localized deal-of-the-day website the fastest growing company ever, and its success is largely due to great email marketing.

The Wall Street Journal cited Groupon’s 50 million email subscribers as a competitive advantage, and noted that some analysts value the company at $15 billion.

The executives will care

Once you can clearly attribute revenue and ROI to email, you might be surprised at how much attention you attract from company leaders.

At the Email Summit, Philippe Dore, Senior Director, Digital Marketing, ATP World Tour, presented his team’s email strategy to sell tickets to professional tennis events. A single email drove over $1 million in revenue, and several others brought in over $100,000 each.

The overall email campaign generated about $1.5 million in total. Suddenly, ATP’s executives were interested.

“We have our CMO talking about email marketing and subject lines,” Dore said.

Related resources

Email Marketing Summit 2011: 7 Takeaways to improve results

Email Marketing Awards 2011 Winners Gallery: Top campaigns and best results

Live Optimization with Dr. Flint McGlaughlin at Email Summit 2011

MarketingSherpa 2011 Email Marketing Benchmark Report

MarketingSherpa Email Essentials Workshop Training with Jeanne Jennings

Photo by: SqueakyMarmot

Marketing Research: Cold, hard cash versus focus groups

December 9th, 2010
Share

“The best research is when individuals pull out their wallet and vote with cold, hard cash.” – my first boss

My first experience in marketing was working with a specialized publishing company. I had the privilege to work on exciting products with sexy topics such as “human resource compliance regulations.” Trust me when I tell you there is no better ice-breaker at a party than talking about a ground-breaking court ruling that will change how your company meets compliance of the Fair Labor Standards Act (FLSA).

As a publisher, we used direct-response marketing to drive sales, with an aggressive program of direct-mail, email and telemarketing. And when it came to new product development, we were big believers in research. From customer surveys to industry research to focus groups, we used it all to make the best possible decision. At least, that was the general assumption…

Out of focus

You always have to test, because many research tactics just help you achieve a best guess. And while a best guess is often closer to the truth than a random guess, it’s sometimes wildly off the mark. In fact, I learned a valuable lesson one day when our company ran a focus group.

The members of this particular focus group were subscribers to a paid newsletter, and we knew that each person had subscribed in response to a specific direct-mail piece. That piece was extremely effective, with a powerful but somewhat provocative headline and letter. Many people loved that direct-mail piece, but many hated it, so we wanted the focus group’s opinion. When we showed the group the piece and asked whether they would respond to it, 40 percent said they would never respond (if only they knew what we knew). Wow, we were shocked!

So, should we conclude that those 40% were bald-faced liars? Not necessarily. What we can conclude is that what people say they will do and what they actually do may be totally different. That is why research is only part of the equation — but if you want to sleep well at night, you have to take the next step…

Voting with their wallets

At the end of the day, the best research was when we tested the product and let the customers in the marketplace determine with their wallet if it was a viable product. We would test critical elements, like book title and price, and very quickly we would know if we had a winner or not.

Yes, all of the surveys and research were necessary to get started, but the most critical research was in our testing program. Testing is an amazing research tool. Regardless of the conversion you are trying to achieve, when your prospect takes (or doesn’t take) an action, you have a valuable piece of information. Your conversion goal may be an event ticket sale, a white paper download, an email newsletter signup, or hundreds of other possible actions, but one thing never changes – the action you are seeking to drive can be tracked.

And if you’re ready to measure when your prospect engages with you, that is when the learning begins.
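The arithmetic behind letting customers vote with their wallets is simple: divide tracked actions by the prospects exposed to each variant and compare. Here’s a minimal sketch with entirely invented numbers for a hypothetical head-to-head test of two book titles:

```python
# Hypothetical test results; all figures invented for illustration.
variants = {
    "Title A": {"visitors": 5000, "orders": 60},
    "Title B": {"visitors": 5000, "orders": 95},
}

for name, data in variants.items():
    rate = data["orders"] / data["visitors"]
    print(f"{name}: {rate:.1%} conversion rate")
# Title A: 1.2% conversion rate
# Title B: 1.9% conversion rate
```

With real data you’d also want to confirm the difference is statistically significant before declaring a winner, but the principle stands: actual orders, not stated intent, settle the question.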

So, I’m thankful for that boss early in my career who told me repeatedly that the best research is when individuals pull out their wallet and vote with cold, hard cash. Over the years, I’ve had many experiences in which individuals tell me they are going to do something, but until they actually do it, I’m a little skeptical. (Editor’s Note: It’s true. Todd told me he was going to write a blog post for quite a while. Now, I believe it.)

So gather as much research as possible, but always remember that cold, hard cash is a pretty sweet piece of research.

Related resources

Are Surveys Misleading? 7 Questions for Better Market Research (Members Library)

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Never Pull Sofa Duty Again: Stop guessing what your audience wants and start asking