Archive

Posts Tagged ‘measurement’

Test Planning: Create a universal test planner in 3 simple steps

May 2nd, 2013 1 comment

One of my responsibilities as a Research Analyst is to manage ongoing test planning with our Research Partners, and at times, keeping tests running smoothly can be a challenge.

This is especially true when you consider that testing is not a static event – it’s more like a living, breathing, continuous cycle.

But even with so many moving parts, effectively managing test plans can be made a little easier with two proven key factors for success – planning and preparation.

Today’s MarketingSherpa blog post offers three tips for test plan management. Our goal is to give marketers a few simple best practices to help keep their testing queue in good order.

 

Step #1. Create

Creating a universal test planner everyone on your team can access is a great place to start.

For our research team, we created a universal test planner including:

  • Results from prior testing with our Research Partner
  • Current active tests
  • Any future testing planned
  • A list of test status definitions that everyone on the team understands (test active, test complete, inconclusive, etc.)
  • A brief description of what is being tested (call-to-action button test, value copy test, etc.)
  • A list of who is responsible for each task in the test plan
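For illustration, one row of such a planner could be sketched as a simple data structure. The field names and status values below are assumptions for the sketch, not MarketingSherpa’s actual tool:

```python
# Hypothetical sketch of a single entry in a shared test planner.
# Field names and status values are illustrative assumptions.
from dataclasses import dataclass, field

STATUSES = {"planned", "active", "complete", "inconclusive"}

@dataclass
class TestPlanEntry:
    test_name: str      # e.g., "Call-to-action button test"
    description: str    # brief note on what is being tested
    status: str         # one of STATUSES
    owner: str          # who is responsible for the next task
    prior_results: list = field(default_factory=list)  # notes from earlier tests

    def __post_init__(self):
        # Enforce the shared status vocabulary so everyone reads it the same way
        if self.status not in STATUSES:
            raise ValueError(f"Unknown status: {self.status}")

entry = TestPlanEntry(
    test_name="Call-to-action button test",
    description="Button copy test: two call-to-action variants",
    status="active",
    owner="Research Analyst",
)
print(entry.status)  # active
```

Validating the status field against a shared list is what keeps the “everyone understands the definitions” requirement honest as the planner grows.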

 

Step #2. Organize

As I mentioned in the previous step, the status of a test can change and, based on the results, so will the ideas and priorities for future testing.

Some tests will move forward in the queue, and others will be pushed back to a later time.

So, to help keep our team informed of changes in the testing environment, we update the planner throughout the day and in real time during brainstorming sessions based on results and Partner feedback.

This allows us to focus our research and testing strategy efforts on expanding on discoveries rather than chasing our tails to stay up to date.

Read more…

Marketing Analytics: Managing through measurement and marketing as revenue center

April 26th, 2013 No comments

“What gets measured is what gets done.” So says the old business maxim, at least.

We wanted to know what marketers get done, so to speak, so in the 2013 Marketing Analytics Benchmark Report, we asked…

Q: Which of the following are you involved with tracking, analyzing or reporting on for your organization?

 

We asked the MarketingSherpa community about these results, and here’s what they had to say …

 

Managing through measurement

These results highlight the indifference, or perhaps lack of experience, when it comes to tracking marketing, especially social media marketing.

Although these channels can be tracked offline (via call tracking) and online (via dynamic numbers and email tracking), it still seems as though there are trackers and non-trackers among marketing specialists.

Even with a nudge effect of marketing across several channels, the ROI of these nudges is important and should be tracked.

The old adage of “managing through measurement” is still important and not having accurate measurement to call upon leaves marketing specialists arguing based on their opinions rather than facts. (And, that’s a sure way to the exit door).

– Boyd Butler, Consultant

Read more…

Marketing Metrics: Aligning ROI goals across the enterprise

May 26th, 2011 No comments

More than 80 percent of CMOs were dissatisfied with their ability to measure marketing ROI, and fewer than 20 percent said their company employed meaningful metrics, according to a CMO Council study quoted in Harvard Business Review.

The same article cited Copernicus Marketing Research findings that noted that most acquisition efforts fail to break even, no more than ten percent of new products succeed, most sales promotions are unprofitable, and advertising ROI is below four percent.

There is no absence of metrics or measurement tools. The problem is less one of analytics than of lack of alignment across the enterprise as to ROI goals.

But how can that alignment be attained?

There is a need for a common vocabulary and shared buy-in as to key performance indicators (KPIs). It is commonly assumed that all can be resolved if the VP of Sales and the VP of Marketing just go off and have a beer together. This rarely works. A better way to achieve alignment is to borrow from the toolbox of strategic planning and to use scenarios.

Scenario planning is a discipline popularized by Royal Dutch Shell in the ‘80s that has become a standard tool of strategic planning professionals. It is the process in which managers invent and consider the implications of alternate assumptions and futures. As a team-building exercise, it can remove the barriers that office politics, turf wars, and loyalty to current vendors bring to the effort to align goals and assumptions.

As consultant Juergen Daum has written, “The purpose of scenario planning is to help managers to change their view of reality, to match it up more closely with reality as it is, and reality as it is going to be. The end result, however, is not an accurate picture of tomorrow, but better decisions about the future.”

Scenario planning session

A scenario planning session can be done over a one- or two-day off-site:

  • Start by modeling a scenario in which the current ROI goals and benchmarks are accurate and lead to a positive future. This is the “rosy scenario” that is implicitly guiding current thinking.
  • The team can then turn, in a politically non-threatening way, to alternate scenarios – those in which current goals and assumptions can be challenged. This process surfaces doubts and uncertainties while clarifying disconnects among the team as to definitions and priorities. Whatever the outcome, the very process builds agreement and understanding.

This process allows managers to confront, without defensiveness, the essential question:

“What if our current assumptions and procedures are wrong?”

Contemplating a scenario without a “rosy” outcome forces participants both to question current practices and to work together to forecast outcomes. The implications of using “wrong” ROI goals can be discussed collaboratively, fostering understanding. Ideally, new metrics can be identified and outmoded ones discarded. Inevitably, participants emerge with greater understanding of their goals, of their key performance indicators, and of each other.

Bob Heyman is a keynote speaker at Optimization Summit 2011, and all attendees will receive a copy of his book, “Marketing by the Numbers: How to Measure and Improve the ROI of Any Campaign,” provided by HubSpot.

Gary Angel, President and CTO, Semphonic, contributed heavily to this blog post as well.

Related Resources

Digital Marketing: How to measure ROI from your agencies

Lead Marketing: Cost-per-lead and lead nurturing ROI

New Chart: What Social Metrics are Organizations Monitoring and Measuring?

Maximize your Agency ROI


Digital Marketing: How to measure ROI from your agencies

May 17th, 2011 7 comments

Today’s marketing world is incredibly complex. The growth of digital has dramatically expanded the number of channels and customer touch points that require marketing attention, and it isn’t just a question of numbers. Digital channels often involve unique skills, unique technology and unique culture. Combining SEO expertise with great digital creative plus Facebook smarts and traditional media buying isn’t just difficult – it’s pretty much impossible.

Inevitably, you’re faced with a world where you need to rely upon, direct, manage and motivate multiple agency partners. To do that – and to understand how to allocate resources between channels, how to decide if an agency is giving you all they can, and how to choose where to invest your time and resources – takes sophisticated measurement. You can’t manage what you don’t measure – this statement is as true for your agency relationships as it is for your marketing dollars.

In a world where there are lies, damn lies, and statistics, why would you let your agencies measure their own performance? If your agencies are siloed, they have every incentive (and ability) to make their channel look maximally successful. If you’ve concentrated everything in a single agency, that agency has every incentive (and ability) to make their entire program look successful and not delve too deeply into any single piece.

In today’s environment, measurement is just too important to leave to the wolves.

Intra-Agency Measurement suffers from four BIG problems:

  • Skill Set: For most agencies, measurement is just grafted onto a creative culture. It isn’t their business, core expertise or focus and isn’t what makes them money.
  • Bias: It doesn’t take evil intent to create bias. One of the great challenges of measurement is the temptation to always pass on good news. When the analyst has a self-interested stake in the measurement, this problem is that much worse.
  • Siloed View of the World: Even the best measurement an agency can provide is typically limited to their world and their tools. They see only their slice of the pie – meaning that cannibalization, cross-channel, and customer issues are invisible to them.
  • Standardization: Every industry has evolved its own way of talking about measurement and they are all different. Nobody agrees what engagement means or how ROI metrics should be applied to them. Vendors have reports and technology that are narrowly adapted to their own language and techniques and cannot be standardized.

What’s the right solution?

You need a “Digital Watchdog” – either an analytics agency of record or an internal employee or department tasked with making sure that every channel you use has the right measurement, the right standards, and the right level of resources and attention.

A Digital Watchdog should be focused explicitly on measurement, measurement tools and measurement skills. That guarantees you a culture based on measurement and an appropriate skill set to solve your measurement challenges. A Digital Watchdog should have NO vested interest in your spend. They should not manage ANY media budget or have any stake in which channels you invest in or use.

That’s what you should expect of a Digital Watchdog. Here’s what they should expect of you.

A Digital Watchdog needs to be given a cross-channel view of your customers and measurement. They need to see and have access to all your marketing spend and agency reporting. A Digital Watchdog needs to be able to create or collaborate on the creation of a comprehensive view of measurement standardization. As long as you allow each channel to measure itself its own way, you can’t expect ANYONE to make sense of the whole picture.

There are some key steps when it comes to getting started with a Digital Watchdog. Usually, you’ll start with a review of the measurement in place for each channel – is it complete, accurate, and robust? Having the basic measurement infrastructure in place (and knowing it’s right) is essential.

The second step is typically the creation of a standardized measurement framework (based on segmentation) that can be applied to every channel. Useful measurement begins with audience segmentation and drives across your business naturally – not by forcing your business into artificial measurement constructs.

Once you’ve got a good framework in place, it’s time to execute both Media-Mix and Attribution Modeling to understand spending interactions and optimization. Media-Mix Modeling is your best tool for deciding how moving the levers of marketing spend by channel drives total business results. Attribution Modeling helps you understand how channels work in harmony (or at cross-purposes) when it comes to acquisition, engagement and conversion.

At the same time, you’ll want to identify the holes and gaps where your agency measurement isn’t adequate, where their performance is sub-optimal, or where you’re not getting the attention you deserve. Your Digital Watchdog should drive channel-specific optimizations for “problem” agencies and help you evaluate how to get more from your relationships.

With the dollars involved in today’s marketing world, there’s just too much at stake to count on your agencies doing the right thing with measurement. They are in the wrong place, with the wrong tools, the wrong motives and the wrong skill sets to do the job right.

Bob Heyman is a keynote speaker at Optimization Summit 2011, and all attendees will receive a copy of his book, “Marketing by the Numbers: How to Measure and Improve the ROI of Any Campaign,” provided by HubSpot.

Gary Angel, President and CTO, Semphonic, contributed heavily to this blog post as well.

Related Resources:

Optimization Summit 2011

Marketing Strategies: Is performance-based vendor pricing the best value?

New Chart: What Social Metrics are Organizations Monitoring and Measuring?

Maximize your Agency ROI

Photo attribution: Randy Robertson

Social Media Measurement: Big data is within reach

April 28th, 2011 2 comments

Should marketers wait for a grand unified theory of social media ROI measurement, or confidently move forward with what they have available to them now?

This question has been at the forefront of my thinking, as we proceed with MarketingSherpa’s joint research project with Radian6 to discover a set of transferable principles, if not a uniform formula to measure social media (SoMe, pronounced “so me!”) marketing effectiveness.

As I have written previously, some of the popular measurement guidelines provide a degree of comfort that comes from having numbers (as opposed to just words and PowerPoint® slides), but fail to connect the marketing activity to bottom-line outcomes.

To help think through this, I spoke with several practitioners to get some feedback “from the trenches” during SoMe Week here in NYC. With their help, I broadly defined two approaches.

Approach 1: Brave the big data

Take large volumes of diverse data, from both digital and traditional media, and look for correlations using “real” big-data analysis. This analysis is performed on a case-by-case basis, and the overarching principles are the well-established general statistical methods, not necessarily specifically designed for marketers.

Pros

  • The methodologies are well established
  • There are already tools to help (Radian6, Alterian, Vocus, etc.)

Cons

  • Most marketers are not also statisticians, nor do they have the requisite tools (e.g., SAS is excellent software, but it comes with a premium price)
  • Comprehensive data must be available across all relevant channels; otherwise, the validity of any conclusions from the data rapidly evaporates (Radian6’s announcement of integrating third-party data streams like Klout, OpenAmplify and OpenCalais, in addition to its existing integration with customer relationship management (CRM), Web analytics and other enterprise systems, certainly helps)
  • In the end, it’s still conversation and not conversion without attribution of transactional data

If the volume of data becomes overwhelming, analytical consulting companies can help. NYC-based Converseon does precisely that, and I asked Mark Kovscek, their SVP of enterprise analytics, about the biggest challenges to getting large projects like this completed efficiently. Mark provided several concrete considerations to help marketers think through this, based on Converseon’s objectives-based approach that creates meaningful marketing action, measures performance, and optimizes results:

  • Marketers must start with a clear articulation of measurable and action-oriented business objectives (at multiple levels, e.g., brand, initiative, campaign), which can be quantified using 3-5 KPIs (e.g., Awareness, Intent, Loyalty)
  • Large volumes of data need to be expressed in the form of simple attributes (e.g., metrics, scores, indices), which reflect important dimensions such as delivery and response and can be analyzed through many dimensions such as consumer segments, ad content and time
  • The key to delivering actionable insights out of large volumes of data is to connect and reconcile the data with the metrics, with the KPIs, and with the business

How much data is enough? The answer depends on the level of confidence required. Mark offered several concrete rules of thumb for the “best-case scenario” when dealing with large volumes of data:

  • Assessing the relationship of data over time (e.g., time series analysis) requires two years of data (three preferred) to accurately understand seasonality and trend

-   You can certainly use much less to understand basic correlations and relationships.  Converseon has created value with 3-6 months of data in assessing basic relationships and making actionable (and valuable) decisions

  • Reporting the relationship at a point in time requires 100-300 records within the designated time period (e.g., for monthly listening reporting, Converseon looks for 300 records per month to report on mentions and sentiment)

-   This is reasonably easy when dealing with Facebook data and reporting on Likes or Impressions

-   However, when dealing with data in the open social graph to assess a brand, topic or consumer group, you can literally process and score millions of records (e.g., tweets, blogs, or comments) to identify the analytic sample to match your target customer profile

  • Assessing the relationship at a point in time (e.g., predictive models) requires 500-1000 records within the designated time period
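These rules of thumb can be encoded as a quick sanity check. The thresholds below are the figures quoted above; the function itself is an illustrative sketch, not Converseon’s actual tooling:

```python
# Illustrative check of the quoted rules of thumb for data sufficiency.
# Thresholds come from the article; the function is a sketch, not a real tool.

def data_sufficiency(purpose, months_of_data=0, records_in_period=0):
    """Return True if the data volume meets the quoted rule of thumb."""
    if purpose == "time_series":         # seasonality and trend analysis
        return months_of_data >= 24      # two years minimum (three preferred)
    if purpose == "point_in_time":       # e.g., monthly listening reports
        return records_in_period >= 100  # 100-300 records; 300 preferred
    if purpose == "predictive_model":
        return records_in_period >= 500  # 500-1,000 records
    raise ValueError(f"Unknown purpose: {purpose}")

print(data_sufficiency("time_series", months_of_data=36))       # True
print(data_sufficiency("point_in_time", records_in_period=50))  # False
```

A check like this only flags the minimums; as the article notes, Converseon has still created value with 3-6 months of data for basic correlations, so the thresholds are guidance, not hard gates.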

Understanding the theoretical aspects of measurement and analysis, of course, is not enough. A culture of measurement-based decision making must exist in the organization, which means designing operations to support this culture. How long does it take to produce a meaningful insight? Several more ideas from Converseon:

  • 80% of the work is usually found in data preparation (compiling, aggregating, cleaning, and managing)
  • Reports that assess relationships at a single point in time can be developed in 2-3 weeks
  • Most predictive models can be developed in 4-6 weeks
  • Assessing in-market results and improving solution performance is a function of campaign timing

Finally, I wanted to know what marketers can do to make this more feasible and affordable. Mark recommends:

  • Clearly articulate business objectives and KPIs and only measure what matters
  • Prioritize data
  • Rationalize tools (eliminate redundancy, look for the 80% solution)
  • Get buy-in from stakeholders early and often

In my next blog post on this topic, I’ll discuss an approach to SoMe measurement that trades some precision and depth for realistic attainability—an option for marketers who can’t afford the expense or the time (both to learn and to do) required to take on “big data.”

Related Resources

Social Media Marketing: Tactics ranked by effectiveness, difficulty and usage

Always Integrate Social Marketing?

Inbound Marketing newsletter – Free Case Studies and How To Articles from MarketingSherpa’s reporters

Social Marketing ROAD Map Handbook

Marketing Research: How asking your customers can mislead you

February 25th, 2011 No comments

In a recent blog post for our sister company MarketingExperiments, I shared my experiences at the fifth Design for Conversion Conference (DfC) in New York City. Today, I want to focus on a topic from Dr. Dan Goldstein’s presentation, and its relevance to usability and product testing for marketers — how focus group studies can effectively misrepresent true consumer preferences.

Asking you for your input on our Landing Page Optimization survey for the 2011 Benchmark Report has firmly planted the topic of surveys at the forefront of my thinking.

Calibration is not the whole story

The need to calibrate focus group data is well recognized by marketers and social scientists alike. The things marketers want to know the most – such as “intent to purchase” – are the most obviously susceptible to misleading results. It’s easy to imagine that when people are asked what they would do with their money in a hypothetical situation (especially when the product itself is not yet available), their answers are not always going to represent their actual behavior when they do face the opportunity to buy.

However, mere calibration (a difficult task, requiring past studies on similar customer segments in which you can compare survey responses to real behavior) is not the only concern. How we ask a question can influence not only the answer, but also the subsequent behavior the respondent is surveyed about.

Dr. Goldstein pointed me to an article in Psychology Today by Art Markman, about research into how “asking kids whether they plan to use drugs in the near future might make them more likely to use drugs in the near future.” Markman recommends that parents must pay attention to when such surveys are taken, and make sure that they talk to their children both before and after to ensure that the “question-behavior effect” does not make them more likely to engage in the behaviors highlighted in the surveys. The assumption is that if the respondent is aware of the question-behavior effect, the effect is less likely to work.

Question-Behavior Effect: The bad

If your marketing survey is focused on features that your product or service does not have—whether your competitors do or do not—then asking these negative questions may predispose your respondents against your product, without them even being aware of the suggestion. This is especially worrisome when you survey existing or past customers, or your prospects, about product improvements. Since you will be pointing out to them things that are wrong or missing, you run a good chance of decreasing their lifetime value (or lead quality, as the case may be).

Perhaps the survey taker should spend a little extra time explaining the question-behavior effect to the respondent before the interaction ends, also making sure that they discuss the product’s advantages and successes at the end of the survey. In short, end on a positive.

Question-Behavior Effect: The good

However, there is also a unique opportunity offered by the question-behavior effect: by asking the right questions, you can also elicit the behavior you want. This means being able to turn any touch point—especially an interactive one like a customer service call—into an influence opportunity.

I use the word “influence” intentionally. Dr. Goldstein pointed me to examples on commitment and consistency from Robert Cialdini’s book Influence: Science and Practice, such as a 1968 study conducted on people at the racetrack who became more confident about their horses’ chance of winning after placing their bets. Never mind how these researchers measured confidence—there are plenty of examples in the world of sales that support the same behavioral pattern.

“Once we make a choice or take a stand, we will [tend to] behave consistently with that commitment,” Cialdini writes. We want to feel justified in our decision. Back in college, when I studied International Relations, we called it “you stand where you sit”—the notion that an individual will adopt the politics and opinions of the office to which they are appointed.

So how does this apply to marketing? You need to examine all touch points between your company and your customers (or your audience), and make a deliberate effort to inject influence into these interactions. This doesn’t mean you should manipulate your customers—but it does mean that you shouldn’t miss an opportunity to remind them why you are the right choice. And if you’re taking a survey—remember that your questions can reshape the respondents’ behaviors.

P.S. From personal experience, do you think being asked a question has influenced your subsequent behavior? Please leave a comment below to share!

Related Resources

MarketingSherpa Landing Page Optimization Survey

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Marketing Research: Cold, hard cash versus focus groups

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

MarketingSherpa Members Library — Are Surveys Misleading? 7 Questions for Better Market Research

Email Marketing: Show me the ROI

February 3rd, 2011 4 comments

After squinting at my screen for weeks trying to read the MarketingSherpa 2011 Email Marketing Benchmark Report PDF, I finally have a hard copy sitting on my desk — and it’s bursting with insight.

Having read the executive summary weeks earlier, I flipped through the chapters today and was struck by this stat:

Does your organization have a method for quantifying ROI from email marketing?

  • No: 59%
  • Yes: 41%

Email marketing can be amazingly efficient. B2C marketers report an average 256% ROI from the channel — pulling in $2.56 for every $1 invested — as mentioned later in the report.

What shocks me is that 59% of email marketers have not gauged their program’s efficiency. This means their company executives are likely unaware of the amazing job they’re doing. Even if executives have seen the clickthrough and conversion rates, they’re likely thinking about that line from Jerry Maguire.

Show me the money

At last week’s Email Marketing Summit, Jeanne Jennings, Independent Consultant and MarketingSherpa Trainer, shot holes in many of the excuses she’s heard for why companies can’t calculate email’s ROI.

Here are three she highlighted:

  1. Our Web analytics software doesn’t provide this information
  2. We can’t track online sales back to email
  3. We don’t have an exact figure for costs

Taking these one at a time, Jennings noted that 1) most analytics solutions can provide the information. Google Analytics does and it’s free. 2) Setting up the tracking is simple. 3) You don’t need exact figures.

“As long as you can compare in an apples-to-apples fashion, that’s enough to get started,” Jennings said.

Judging performance by clickthrough and conversion rates is not enough — you should know the revenue generated, both at the campaign level and at the broader program level.

Two simple calculations Jennings suggested:

  • Return on investment: Net revenue / cost
  • Revenue per email sent: Net revenue / # of emails sent
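Worked through with invented figures (the revenue, cost and email volume below are hypothetical, chosen only to illustrate the two formulas):

```python
# Jennings' two campaign metrics, computed on hypothetical example figures.

def roi(net_revenue, cost):
    """Return on investment: net revenue / cost."""
    return net_revenue / cost

def revenue_per_email(net_revenue, emails_sent):
    """Net revenue / number of emails sent."""
    return net_revenue / emails_sent

# Hypothetical campaign: $12,800 net revenue, $5,000 cost, 100,000 emails sent
print(f"ROI: {roi(12800, 5000):.0%}")                                 # ROI: 256%
print(f"Revenue per email: ${revenue_per_email(12800, 100000):.4f}")  # $0.1280
```

Note that a ratio of 2.56 reads as a 256% ROI — the same shape of figure the Benchmark Report cites for B2C email — which is exactly the dollars-and-cents framing that impresses the movers and shakers.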

On a campaign-level, these metrics will reveal which campaigns pull in more money — not just more clicks. For your overall program, they quickly convey the importance of your work.

Also: The movers and shakers in your company are going to be much more impressed with figures that include dollar signs.

Show email’s potential

Another way to convince executives of email’s power is to point to success at other companies. Also at the Email Summit last week, Jeff Rohrs, VP, Marketing, ExactTarget, mentioned Groupon as a great example that email marketers could rally around.

Forbes recently dubbed the localized deal-of-the-day website the fastest-growing company ever, and its success is largely due to great email marketing.

The Wall Street Journal cited Groupon’s 50 million email subscribers as a competitive advantage and noted that some analysts estimate the company’s value at $15 billion.

The executives will care

Once you can clearly attribute revenue and ROI to email, you might be surprised at how much attention you attract from company leaders.

At the Email Summit, Philippe Dore, Senior Director, Digital Marketing, ATP World Tour, presented his team’s email strategy to sell tickets to professional tennis events. A single email drove over $1 million in revenue, and several others brought in over $100,000 each.

In all, the email campaign generated about $1.5 million. Suddenly, ATP’s executives were interested.

“We have our CMO talking about email marketing and subject lines,” Dore said.

Related resources

Email Marketing Summit 2011: 7 Takeaways to improve results

Email Marketing Awards 2011 Winners Gallery: Top campaigns and best results

Live Optimization with Dr. Flint McGlaughlin at Email Summit 2011

MarketingSherpa 2011 Email Marketing Benchmark Report

MarketingSherpa Email Essentials Workshop Training with Jeanne Jennings

Photo by: SqueakyMarmot

Marketing Research: Cold, hard cash versus focus groups

December 9th, 2010 4 comments

“The best research is when individuals pull out their wallet and vote with cold, hard cash.” – my first boss

My first experience in marketing was working with a specialized publishing company. I had the privilege to work on exciting products with sexy topics such as “human resource compliance regulations.” Trust me when I tell you there is no better ice-breaker at a party than talking about a ground-breaking court ruling that will change how your company meets compliance of the Fair Labor Standards Act (FLSA).

As a publisher, we used direct-response marketing to drive sales, with an aggressive program of direct-mail, email and telemarketing. And when it came to new product development, we were big believers in research. From customer surveys to industry research to focus groups, we used it all to make the best possible decision. At least, that was the general assumption…

Out of focus

You always have to test because many research tactics just help you achieve a best guess. And while a best guess is often closer to the truth than a random guess, it’s sometimes wildly off the mark. In fact, I learned a valuable lesson one day when our company performed a focus group.

The members of this particular focus group were subscribers of a paid newsletter, and we knew that each person had subscribed by responding to a specific direct mail piece. That mail piece was extremely effective, with a powerful but somewhat provocative subject line and letter. Many people loved that direct-mail piece, but many hated it, so we wanted to get the opinion of the focus group members. When we showed the group the direct-mail piece and asked them if they would respond to that piece, 40 percent said they would never respond (if they only knew what we knew). Wow, we were shocked!

So, should we conclude that those 40% were bald-faced liars? Not necessarily. What we can conclude is that what people say they will do and what they actually do may be totally different. That is why research is only part of the equation, but if you want to sleep well at night, you have to take the next step…

Voting with their wallets

At the end of the day, the best research was when we tested the product and let the customers in the marketplace determine with their wallet if it was a viable product. We would test critical elements, like book title and price, and very quickly we would know if we had a winner or not.

Yes, all of the surveys and research were necessary to get started, but the most critical research was in our testing program. Testing is an amazing research tool. Regardless of the conversion you are trying to achieve, when your prospect takes (or doesn’t take) an action, you have a valuable piece of information. Your conversion goal may be an event ticket sale, a white paper download, an email newsletter signup, or hundreds of other possible actions, but one thing never changes – the action you are seeking to drive can be tracked.

And if you’re ready to measure when your prospect engages with you, that is when the learning begins.

So, I’m thankful for that boss early in my career telling me repeatedly that the best research is when individuals pull out their wallet and vote with cold, hard cash. Over the years, I’ve had many experiences when individuals tell me they are going to do something, but until they actually do it, I’m a little skeptical. (Editor’s Note: It’s true. Todd told me he was going to write a blog post for quite a while. Now, I believe it.)

So gather as much research as possible, but always remember that cold, hard cash is a pretty sweet piece of research.

Related resources

Are Surveys Misleading? 7 Questions for Better Market Research (Members Library)

Marketing Research and Surveys: There are no secrets to online marketing success in this blog post

Focus Groups Vs. Reality: Would you buy a product that doesn’t exist with pretend money you don’t have?

Never Pull Sofa Duty Again: Stop guessing what your audience wants and start asking

Measuring Social Site Traffic

June 9th, 2010 3 comments

People who follow your social media updates are likely fans of your brand. Their motivations may vary, but if they’re reading and responding to your content, then they know who you are and they like hearing from you.

“It’s not really surprising that, like search traffic, social media traffic tends to be very qualified,” says Maura Ginty, Senior Manager, Search and Social, Autodesk. “It can be small in volume, but it’s really qualified.”

Ginty’s team uncovered this insight by monitoring social media traffic to Autodesk’s website and analyzing the actions visitors took after arrival. Obtaining data around social media is not difficult, Ginty says. The hard part is using it.

“I think people end up feeling like the data is going to answer their question, but it’s the interpretation of all that data and the filtering of all that volume that really helps provide insight into what to do next,” she says. (Keep an eye on our Great Minds newsletter for an upcoming article on how to improve social media measurement.)

Ginty’s team started synthesizing data to uncover the social impact of online marketing campaigns, in part, by using a tool created with Covario. For example, the team can calculate the velocity of a marketing message — the number of people a message reaches in a certain amount of time in social media — and combine it with a sentiment analysis. This information helps the team gauge how quickly messages spread, how people respond, and which efforts have strong social appeal.

“We’ve seen from different areas that a lot of the push of information will end up happening in the first 24 hours,” Ginty says.
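The velocity metric Ginty describes can be sketched in a few lines. This is a hypothetical illustration of the idea (audience reached per hour within a window after posting), not Covario's actual tool; the function name, data layout, and numbers are all made up:

```python
from datetime import datetime, timedelta

def message_velocity(reach_events, posted_at, window_hours=24):
    """Average audience reached per hour within `window_hours` of posting.

    reach_events: list of (timestamp, audience_size) tuples, e.g. a retweet
    that exposed the message to 1,200 followers.
    """
    cutoff = posted_at + timedelta(hours=window_hours)
    reached = sum(
        audience for ts, audience in reach_events if posted_at <= ts <= cutoff
    )
    return reached / window_hours

posted = datetime(2010, 6, 1, 9, 0)
events = [
    (datetime(2010, 6, 1, 10, 0), 1200),  # share within the first 24 hours
    (datetime(2010, 6, 1, 15, 30), 800),  # another early share
    (datetime(2010, 6, 3, 12, 0), 5000),  # too late; excluded from the window
]
print(message_velocity(events, posted))  # (1200 + 800) / 24 ≈ 83.3 people/hour
```

Pairing a number like this with sentiment scores is what lets a team say not just how far a message traveled in the first 24 hours, but whether the reaction was positive.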

Social media is a new channel with unique brand/customer interactions that can be tested and measured. I am excited to see how other industry leaders will start measuring and tweaking their social efforts to improve everything from brand image to conversion rates.

Are you doing any testing in social? Let us know in the comments…

SEO Metrics to Measure

February 23rd, 2010 4 comments

Natural search marketers have been in a precarious position for the last few years. Much of the data they’re using is supplied by search engines, and some of that data is fuzzy at best.

Adam Audette, in a Search Engine Land post today, goes as far as calling some of the data unreliable and “downright misleading.” However, Audette astutely notes that marketers need the data even if they don’t completely trust it.

What’s a marketer to do? Here are Audette’s suggestions for the SEO metrics you should track:
- Percentage of overall site traffic from search
- Percentage share of each engine
- Free search traffic at the keyword level, clustering related terms
- Difference between branded and non-branded search traffic

Metrics that he implies are far less reliable:
- Ranking reports
- Indexed page counts
- Backlink counts
- Toolbar PageRank

For marketers, I would add conversion data to Audette’s list of primary metrics to measure — especially conversion data for non-branded keywords. If you’re a natural search marketer, any conversions you can prove came through non-branded keyword searches point directly to money you are bringing the company.

Branded search conversions are great, but they show that the searcher already knew your brand. The searcher has likely been reached by another marketing channel. A non-branded conversion implies that someone chose you over the competitors also listed in the results.
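The branded/non-branded split described above is easy to compute once you have keyword-level conversion data. A minimal sketch, assuming a simple substring match against a list of brand terms (the brand name, keywords, and counts below are invented for illustration):

```python
# Hypothetical brand terms for a fictional company, "Acme".
BRAND_TERMS = {"acme", "acmesoft"}

def is_branded(keyword):
    """True if the search keyword contains any known brand term."""
    kw = keyword.lower()
    return any(term in kw for term in BRAND_TERMS)

# Invented keyword-level conversion counts from natural search.
conversions = [
    ("acme crm pricing", 14),          # branded: searcher already knew us
    ("best crm for small business", 9),  # non-branded: we won on merit
    ("crm software comparison", 6),
]

branded = sum(count for kw, count in conversions if is_branded(kw))
non_branded = sum(count for kw, count in conversions if not is_branded(kw))
print(branded, non_branded)  # 14 15
```

In practice, brand matching usually needs more care than a substring check (misspellings, product names), but even this rough split lets you report the non-branded conversions that demonstrate new demand you captured.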

Which metrics do you consider vital? And how reliable are they?