Social Media Measurement: Big data is within reach
Should marketers wait for a grand unified theory of social media ROI measurement, or confidently move forward with what they have available to them now?
This question has been at the forefront of my thinking as we proceed with MarketingSherpa’s joint research project with Radian6 to discover a set of transferable principles, if not a uniform formula, to measure social media (SoMe, pronounced “so me!”) marketing effectiveness.
As I have written previously, some of the popular measurement guidelines provide a degree of comfort that comes from having numbers (as opposed to just words and PowerPoint® slides), but fail to connect the marketing activity to bottom-line outcomes.
To help think through this, I spoke with several practitioners to get some feedback “from the trenches” during SoMe Week here in NYC. With their help, I broadly defined two approaches.
Approach 1: Brave the big data
Take large volumes of diverse data, from both digital and traditional media, and look for correlations using “real” big-data analysis. This analysis is performed on a case-by-case basis, and the overarching principles are the well-established general statistical methods, not necessarily specifically designed for marketers.
On the plus side:
- The methodologies are well established
- Tools already exist to help (Radian6, Alterian, Vocus, etc.)
On the minus side:
- Most marketers are neither statisticians nor equipped with the requisite tools (e.g., SAS is excellent software, but it comes at a premium price)
- Comprehensive data must be available across all relevant channels; otherwise, the validity of any conclusions drawn from the data rapidly evaporates (Radian6’s announcement that it is integrating third-party data streams such as Klout, OpenAmplify and OpenCalais, in addition to its existing integration with customer relationship management (CRM), Web analytics, and other enterprise systems, certainly helps)
- In the end, without attribution of transactional data, it’s still conversation, not conversion
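To make the approach concrete, here is a minimal Python sketch of the correlation hunt described above: relating one social metric (daily mention volume) to one business outcome (daily sales). The data is simulated and the variable names are illustrative assumptions; in practice the two series would come from a listening tool and a CRM or transaction system, joined on date.

```python
# Sketch of Approach 1: look for a correlation between a social media
# series and a business-outcome series over the same dates.
import numpy as np

rng = np.random.default_rng(0)
days = 90

mentions = rng.poisson(lam=200, size=days).astype(float)  # daily mention counts (simulated)
noise = rng.normal(0.0, 10.0, size=days)
sales = 50.0 + 0.8 * mentions + noise                     # sales partly driven by buzz (simulated)

# Pearson correlation between the two daily series
r = np.corrcoef(mentions, sales)[0, 1]
print(f"correlation between mentions and sales: r = {r:.2f}")
```

A real analysis would of course need the transactional attribution noted above; a strong correlation in channel data alone still falls short of proving conversion.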
If the volume of data becomes overwhelming, analytical consulting companies can help. NYC-based Converseon does precisely that, and I asked Mark Kovscek, their SVP of enterprise analytics, about the biggest challenges to getting large projects like this completed efficiently. Mark provided several concrete considerations to help marketers think through this, based on Converseon’s objectives-based approach that creates meaningful marketing action, measures performance, and optimizes results:
- Marketers must start with a clear articulation of measurable and action-oriented business objectives (at multiple levels, e.g., brand, initiative, campaign), which can be quantified using 3-5 KPIs (e.g., Awareness, Intent, Loyalty)
- Large volumes of data need to be expressed in the form of simple attributes (e.g., metrics, scores, indices) that reflect important dimensions such as delivery and response, and that can be analyzed across dimensions such as consumer segments, ad content and time
- The key to delivering actionable insights out of large volumes of data is to connect and reconcile the data with the metrics, with the KPIs, and with the business
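Rolling raw records up into the “simple attributes” mentioned above might look like this sketch: a net-sentiment score per month computed from individual mentions. The record fields and the scoring rule are assumptions for illustration, not any vendor’s schema.

```python
# Aggregate raw listening records into one simple attribute per month:
# net sentiment = (positive - negative) / total, in the range [-1, 1].
from collections import Counter, defaultdict

# Hypothetical raw records; a real feed would carry many more fields.
records = [
    {"month": "2011-03", "sentiment": "positive"},
    {"month": "2011-03", "sentiment": "negative"},
    {"month": "2011-03", "sentiment": "positive"},
    {"month": "2011-04", "sentiment": "neutral"},
    {"month": "2011-04", "sentiment": "positive"},
]

by_month = defaultdict(list)
for rec in records:
    by_month[rec["month"]].append(rec["sentiment"])

def net_sentiment(sentiments):
    """Share of positive minus share of negative mentions."""
    c = Counter(sentiments)
    total = sum(c.values())
    return (c["positive"] - c["negative"]) / total if total else 0.0

scores = {month: net_sentiment(s) for month, s in by_month.items()}
for month in sorted(scores):
    print(f"{month}: net sentiment = {scores[month]:+.2f}")
```

The same pattern extends to other attributes (delivery, response, share of voice): reduce millions of records to a handful of scores that can be tracked against the KPIs.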
How much data is enough? The answer depends on the level of confidence required. Mark offered several concrete rules of thumb for “best-case scenario” when dealing with large volumes of data:
- Assessing the relationship of data over time (e.g., time series analysis) requires two years of data (three preferred) to accurately understand seasonality and trend
– You can certainly use much less to understand basic correlations and relationships. Converseon has created value with 3-6 months of data in assessing basic relationships and making actionable (and valuable) decisions
- Reporting the relationship at a point in time requires 100-300 records within the designated time period (e.g., for monthly listening reporting, Converseon looks for 300 records per month to report on mentions and sentiment)
– This is reasonably easy when dealing with Facebook data and reporting on Likes or Impressions
– However, when dealing with data in the open social graph to assess a brand, topic or consumer group, you can literally process and score millions of records (e.g., tweets, blogs, or comments) to identify the analytic sample to match your target customer profile
- Assessing the relationship at a point in time (e.g., predictive models) requires 500-1000 records within the designated time period
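The sample-size rules of thumb above lend themselves to a simple pre-flight check before reporting. This is an illustrative Python sketch with made-up monthly counts; the thresholds come from the bullets above, but the code is not Converseon’s tooling.

```python
# Check each month's record count against the rules of thumb before
# committing to point-in-time reporting or predictive modeling.
REPORTING_MIN = 300   # ~records per month for reporting on mentions and sentiment
MODELING_MIN = 500    # ~records in the period for predictive models

monthly_counts = {"2011-01": 412, "2011-02": 287, "2011-03": 655}  # made-up data

decisions = {
    month: ("report" if count >= REPORTING_MIN else "insufficient sample")
    for month, count in monthly_counts.items()
}

for month in sorted(decisions):
    print(f"{month}: {monthly_counts[month]} records -> {decisions[month]}")
```

A month that falls short (here, the 287-record month) would either be held back from reporting or widened to a larger sample from the open social graph, as described above.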
Understanding the theoretical aspects of measurement and analysis, of course, is not enough. A culture of measurement-based decision making must exist in the organization, which means designing operations to support this culture. How long does it take to produce a meaningful insight? Several more ideas from Converseon:
- 80% of the work is usually found in data preparation (compiling, aggregating, cleaning, and managing)
- Reports that assess relationships at a single point in time can be developed in 2-3 weeks
- Most predictive models can be developed in 4-6 weeks
- Assessing in-market results and improving solution performance is a function of campaign timing
Finally, I wanted to know what marketers can do to make this more feasible and affordable. Mark recommends:
- Clearly articulate business objectives and KPIs and only measure what matters
- Prioritize data
- Rationalize tools (eliminate redundancy, look for the 80% solution)
- Get buy-in from stakeholders early and often
In my next blog post on this topic, I’ll discuss an approach to SoMe measurement that trades some precision and depth for realistic attainability, an approach for the many marketers who can’t afford the expense or the time (both to learn and to do) that “big data” requires.