Archive for the ‘Research And Measurement’ Category

How a Single Source of Data Truth Can Improve Business Decisions

September 12th, 2014

One of the great things about writing MarketingSherpa case studies is having the opportunity to interview your marketing peers who are doing, well, just cool stuff. Also, being able to highlight challenges that can help readers improve their marketing efforts is a big perk as well.

A frustrating part of the process is that during our interviews, we get a lot of incredible insights that end up on the cutting room floor in order to craft our case studies. Luckily for us, some days we can share those insights that didn’t survive the case study edit right here in the MarketingSherpa Blog.

Today is one of those times.

 

Setting the stage

A recent MarketingSherpa Email Marketing Newsletter article — Marketing Analytics: How a drip email campaign transformed National Instruments’ data management — detailed a marketing analytics challenge at National Instruments, a global B2B company with a customer base of 30,000 companies in 91 countries.

The data challenge grew out of a drip email campaign centered on National Instruments’ signature product, after the reported conversion rate dropped at each stage: from the beta test, to the global rollout and, finally, to the results recalculated by a new analyst.

The drip email campaign tested several of National Instruments’ key markets, and after the beta test was completed, the program was rolled out globally.

The data issues that came up when the team looked into the conversion metrics were:

  • The beta test converted at 8%
  • The global rollout was at 5%
  • A new analyst calculated the conversion rate at 2% after parsing the data set herself, with no documentation of how the 5% figure had been reached
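The gap between the 5% and 2% figures shows why undocumented calculations are dangerous. As a purely hypothetical sketch (the figures below are invented, not National Instruments’ data), the same raw campaign numbers can produce very different “conversion rates” depending on which denominator an analyst chooses:

```python
# Hypothetical figures only -- a sketch of how two analysts can report
# different conversion rates from identical data when the denominator
# (and its documentation) differs.

sends = 10_000       # emails sent
delivered = 6_250    # emails that actually reached an inbox
conversions = 500    # tracked conversions

# Analyst A divides by delivered emails ...
rate_vs_delivered = conversions / delivered   # 0.08 -> reported as "8%"

# ... while Analyst B divides by total sends.
rate_vs_sends = conversions / sends           # 0.05 -> reported as "5%"

print(f"{rate_vs_delivered:.0%} vs. {rate_vs_sends:.0%}")  # prints "8% vs. 5%"
```

Neither analyst is necessarily wrong; without a documented definition, though, the two numbers look like a data problem rather than a methodology difference, which is exactly the argument for a single source of data truth.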

Read the entire case study to find out how the team responded to that marketing challenge and improved its entire data management process.

Read more…


Customer Relevance: 3 golden rules for cookie-based Web segmentation

September 13th, 2013

Over the years, the Internet has become more adaptive to the things we want.

It often seems as if sites are directly talking to us and can almost predict the things we are searching for, and in some ways, they are.

Once you visit a website, you may get a cookie saved within your browser that stores information about your interactions with that site. Websites use this cookie to remember who you are. You can use this same data to segment visitors on your own websites by presenting visitors with a tailored Web experience.

Much like a salesman with some background on a client, webpages are able to make their “pitch” to visitors by referencing information they already know about them to encourage clickthrough and, ultimately, conversion.

Webpages get this information from cookies and then use a segmentation or targeting platform to give visitors tailored Web experiences.
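A minimal server-side sketch of that mechanism, using only Python’s standard library (the cookie name, segment labels and page variants here are hypothetical, and real targeting platforms are far more sophisticated):

```python
from http.cookies import SimpleCookie

def pick_experience(cookie_header: str) -> str:
    """Choose a tailored page message based on a hypothetical 'segment' cookie."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)

    segment = cookies["segment"].value if "segment" in cookies else None

    variants = {
        "returning_buyer": "Welcome back -- see what's new since your last visit",
        "researcher": "Compare our plans side by side",
    }
    # An unknown or missing segment falls back to the generic experience,
    # which keeps the tailoring safe when the cookie data is wrong.
    return variants.get(segment, "Explore our full catalog")

print(pick_experience("segment=researcher; _ga=GA1.2.123"))
# prints "Compare our plans side by side"
```

Note the default branch: as the rules below argue, cookie data is often wrong, so a tailored experience should always degrade gracefully to a generic one.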

Cookies can also be used to provide visitors with tailored ads, but in today’s MarketingSherpa Blog post, we will concentrate on your website, and how segmentation can be used on your pages to provide more relevant information to your potential customers.

 

Test your way into cookie-based segmentation

At MECLABS, we explore cookie-based segmentation the only way that makes sense to us – by testing it.

It’s fairly easy to identify the different variables you would want to segment visitors by, but how best to speak to each segment is something you should research through testing. It’s also easy to become distracted by the possibilities of the technology, but in reality, the basic principles of segmentation still apply, as well as the following general rules.

 

Rule #1. Remember you are segmenting the computer, not the person

There are more opportunities for error when segmenting online because multiple people may use the same computer.

Therefore, online segmentation has some mystery to it. You can tailor your message to best fit the cookies, but that may not accurately represent the needs of the specific person sitting in front of the computer at that time.

Many segmentation platforms boast a 60% to 80% confidence level when it comes to how accurately they can segment visitors, but I think a better way to position this information is that there is a 20% to 40% error rate.

That is pretty high!

Be cautious with how you segment. Make sure the different experiences you display are not too different and do not create discomfort for the visitor.

For visitors who do not share a computer, error can still be high. They may be cookied for things that do not accurately describe them.

I bet if you looked at your browser history, it may not be the most precise representation of who you are as a person. Therefore, don’t take cookie data as fact because it most likely isn’t. It should be used as a tool in your overall segmentation strategy and not serve as your primary resource for information about your customers.

 

Rule #2. Be helpful, not creepy

People are getting used to the Internet making suggestions and presenting only relevant information to them.

Some have even come to expect this sort of interaction with their favorite sites. However, there is a fine line between helpful and creepy. Visitors probably don’t want to feel like they are being watched or tracked. Marketers should use the data collected about their visitors in a way that does not cross the threshold at which those visitors consciously feel tracked.

For example, providing location-specific information to visitors in a certain region is alright, but providing too much known information about those visitors may not be.

Cookies can tell you income level, demographic information, shopping preferences and so much more. Combining too much known information could seem overwhelming to the visitor and rather than speaking directly to them, you risk scaring them off.

Instead of making it blatantly obvious to visitors you have collected information on them, I would suggest an approach that supplies users with relevant information that meets their needs.

Read more…


Marketing Process: Managing your business leader’s testing expectations

June 25th, 2013

Every Research Partner wants a lift, but we know that sometimes those lifts aren’t achievable without learning more about their customers first.

And often, our biggest lifts are associated with radical redesign tests that really shake things up on a landing page. That is because the changes are more drastic than a single-factor A/B test that allows for pinpointing discoveries.

So, how can you strike a balance between using these two approaches while still delivering results that satisfy expectations?

You can achieve this by managing your client’s or business leader’s expectations effectively.

It sounds easier said than done, but there are a few things you can do to satisfy a client’s or business leader’s needs for lifts and learnings. 

 

Step #1. Start with radical changes that challenge the paradigm

At MECLABS, we often recommend a strategic testing cycle with radical redesign testing (multiple clusters as opposed to a single-factor A/B split) to identify any untapped potential that may exist on a Research Partner’s landing page.

However, you must make sure you are not making random changes to a page simply to achieve a radically different control and treatment. Instead, stay truly focused on challenging the paradigms and assumptions behind the control by testing with a hypothesis.

For example, Sierra Tucson, an addiction and mental health rehabilitation facility, found that a radical redesign shifting its landing page’s focus from luxury to trust resonated better with its target audience. The company also generated 220% more leads from the test to boot.

 

Step #2. Zoom in on general areas your radical redesign test has identified as having a high potential for impacting conversion

Next, we suggest refining with variable cluster testing, also known as select clusters.

If you identify a radical shift in messaging to be effective, as Sierra Tucson did, you might next want to try different copy, different designs or different offers, just to name a few options.

Read more…


Testing: 3 common barriers to test planning

June 14th, 2013

Sometimes while working with our Research Partners, I hear interesting explanations on why they can’t move forward with testing a particular strategy.

And as you would expect, there are a few common explanations I encounter more often than others:

  • “We’ve always done it like this.”
  • “Our customers are not complaining, so why change?”

And my personal favorite…

  • “We already tested that a few years ago and it didn’t work.”

While there are some very legitimate barriers to testing that arise during planning (testing budgets, site traffic and ROI), the most common explanations of “We can’t do that” I hear rarely outweigh the potential revenue being left on the table – at least not from this testing strategist’s point of view.

So in today’s MarketingSherpa blog post, we will share three of the most common barriers to testing and why your marketing team should push past them.

 

The legacy barrier – “We’ve always done it like this.”

Legacy barriers to testing are decisions derived from comfort.

But what guarantee does anyone ever have that learning more about your customers is going to be a comfortable experience? So, when I receive a swift refusal to test based on “We’ve always done it like this,” I propose an important question – what created the legacy in your organization in the first place?

Generally, many companies understandably create business constraints and initiatives around what is acceptable for the market at a given point in time.

But what happens far too often is that these constraints and initiatives turn into habits. Habits that are passed on from marketer to marketer, until the chain of succession gives way to a forgotten lore of why a particular practice was put in place.

This ultimately results in a business climate in which the needs of yesteryear continue to take priority over the needs you have today.

So, if you find yourself facing a legacy barrier, below are a few resources from our sister company MarketingExperiments to help you achieve the buy-in you need to challenge the status quo:

What to test (and how) to increase your ROI today

Value Proposition: A free worksheet to help you win arguments in any meeting

 

The false confidence barrier – “Our customers are not complaining, so why change?”

The false confidence barrier is built on the belief that if it isn’t broken, don’t fix it – or at least, it isn’t broken as far as you’re aware.

This is especially important if your organization is determined to use the absence of customer complaints as the metric of success when evaluating a website’s performance – and this happens more often than you would think.

So, considering for a moment a hypothetical customer is having an unpleasant experience on your website, ask yourself…

What obligation does a customer have to complain about their experience to you?

My recommendation in this case is to never assume customer silence is customer acceptance.

Instead, take a deeper look at your sales funnel for opportunities to mitigate elements of friction and anxiety that may steer customers away from your objectives, rather than towards them.

Read more…


Test Planning: Create a universal test planner in 3 simple steps

May 2nd, 2013

One of my responsibilities as a Research Analyst is to manage ongoing test planning with our Research Partners and at times, keeping tests running smoothly can be a challenge.

This is especially true when you consider testing is not a static event – it’s more like a living, breathing continuous cycle of motion.

But even with so many moving parts, effectively managing test plans can be made a little easier with two proven key factors for success – planning and preparation.

Today’s MarketingSherpa blog post offers three tips for test planning management. Our goal is to give marketers a few simple best practices to help keep their testing queue in good order.

 

Step #1. Create

Creating a universal test planner everyone on your team can access is a great place to start.

For our research team, we created a universal test planner including:

  • Results from prior testing with our Research Partner
  • Current active tests
  • Any future testing planned
  • A list of test status definitions that everyone on the team understands (test active, test complete, inconclusive, etc.)
  • A brief description of what is being tested (call-to-action button test, value copy test, etc.)
  • A list of who is responsible for each task in the test plan
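One way to sketch such a planner as a shared, structured record (the field names and statuses below are illustrative, not MECLABS’ actual tool):

```python
from dataclasses import dataclass
from enum import Enum

class TestStatus(Enum):
    PLANNED = "planned"
    ACTIVE = "active"
    COMPLETE = "complete"
    INCONCLUSIVE = "inconclusive"

@dataclass
class PlannedTest:
    name: str                 # e.g., "Call-to-action button test"
    description: str          # brief note on what is being tested
    status: TestStatus
    owner: str                # who is responsible for this task
    result_summary: str = ""  # filled in once the test completes

# The planner itself is just an ordered queue everyone can read and update.
planner = [
    PlannedTest("CTA button test", "Button copy: 'Buy now' vs. 'Get started'",
                TestStatus.ACTIVE, "analyst-a"),
    PlannedTest("Value copy test", "Headline emphasizing trust vs. price",
                TestStatus.PLANNED, "analyst-b"),
]

active_tests = [t.name for t in planner if t.status is TestStatus.ACTIVE]
print(active_tests)  # prints "['CTA button test']"
```

Whether the planner lives in a spreadsheet or in code, the point is the same: one shared structure with agreed-upon status definitions, so no one has to guess what “done” means.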

 

Step #2. Organize

As I mentioned in the previous step, the status of a test can change and, based on the results, so will the ideas and priorities for future testing.

Some tests will move forward in the queue, and others will be pushed back to a later time.

So, to help keep our team informed of changes in the testing environment, we update the planner throughout the day and in real time during brainstorming sessions based on results and Partner feedback.

This allows us to focus our research and testing strategy efforts on expanding on discoveries versus chasing our tails to keep up-to-date.

Read more…


Marketing Analytics: Managing through measurement and marketing as revenue center

April 26th, 2013

“What gets measured is what gets done.” So says the old business maxim, at least.

We wanted to know what marketers get done, so to speak, so in the 2013 Marketing Analytics Benchmark Report, we asked…

Q: Which of the following are you involved with tracking, analyzing or reporting on for your organization?

 

We asked the MarketingSherpa community about these results, and here’s what they had to say …

 

Managing through measurement

These results highlight the indifference, or perhaps lack of experience, when it comes to tracking marketing, especially social media marketing.

Even though these channels can be tracked both offline (via call tracking) and online (via dynamic numbers and email tracking), it still seems as though marketing specialists divide into trackers and non-trackers.

Even with a nudge effect of marketing across several channels, the ROI of these nudges is important and should be tracked.

The old adage of “managing through measurement” is still important and not having accurate measurement to call upon leaves marketing specialists arguing based on their opinions rather than facts. (And, that’s a sure way to the exit door).

– Boyd Butler, Consultant

Read more…


Marketing Metrics: Do your analytics capture the real reasons customers buy from you?

April 16th, 2013

How can you track the most impactful elements of your marketing funnel? Let’s start with an analogy …

I once had a crush on a girl.  I talked to her every day, but she rarely took notice of my existence.  She liked the “bad boys,” and I was kind of a nerd.  It seemed as if the stars were aligned against us.

I tried asking sweetly, coming up with inventive date ideas, even appealing to her sense of pity, all to no avail.  Finally, after a year or so of trying, I wrote her a letter telling how I felt.  She finally accepted my invitation and we went on a date.

My takeaway from this exchange was letters work best. (Admittedly, my letters are particularly awesome.)

What I didn’t know was my letter had relatively little to do with her decision. Years later, I asked her why she finally decided to go out with me. She admitted my persistence played a role, but the bigger factor was that she had her heart broken by one of the aforementioned “bad boys,” and decided to give a nice guy a chance.

I was floored.  I had no idea these events had ever transpired, and more importantly, had vastly overestimated my letter writing ability.

What I had was essentially a last click attribution model. This is the way in which countless organizations currently measure conversions.  We, as an industry, have come a long way in terms of being excited about measuring and testing our marketing efforts.

However, looking at the last click before conversion as a sole contributor to the conversion decision is as near-sighted as assuming the young lady accepted my date invitation based upon my letter writing skills.  The letter was a factor, but it wasn’t the only factor.

I need a better model.

 

Where should I spend my marketing dollars?

Using the last click attribution method, I can determine the value of a conversion generated from an email campaign.  I might arrive at the conclusion my marketing dollars are best spent on building email lists and optimizing email campaigns.

While there may be truth in that statement, it’s only partially correct. The real story in this scenario might be that a customer first interacted with my brand when a friend shared a product review on Facebook. From there, a likely sequence of events could be:

  • The customer visited and liked my Facebook page, and then left.
  • Weeks later, I launched a new product via Facebook post.  The customer saw the post and then left the platform to do some research.
  • While researching the new product on Google, a PPC ad appeared and convinced the customer to click through to my site.
  • Once on the website, the customer joined my email list.
  • Two weeks later, I sent an email which the customer subsequently viewed and converted, purchasing my product.

From this example, it’s obvious the customer was nurtured to conversion through a series of interactions including social media, PPC, landing pages and email.  Now, how much of my marketing dollars should go to each channel, since in this case, they were all obviously necessary for conversion?

 

Attribution models

Solving this problem requires the use of a different attribution model, and not all attribution models are created equal. I remember how happy I was when I learned there were multiple varieties of steak.  I had always eaten sirloin, because that’s what my dad always cooked.  So, you can imagine my excitement the first time I tasted filet mignon!

Similarly, there are a wide variety of attribution models to suit everyone’s taste.

One example is the linear ratio model, which is a dynamic model that attributes different values to different purchase and research phases. For instance, it might:

  • Attribute 5% of revenue to Facebook for the research and awareness piece of our sample transaction above.
  • Assign 25% of that revenue to PPC ads.
  • Finish by assigning 70% of the attribution to the email campaign that caused the click.
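The difference between the two approaches can be made concrete with a short sketch (the $1,000 sale, the channel names and the 5/25/70 weights are illustrative, not a real model’s output):

```python
# Contrast last-click attribution with the linear ratio split described above.
# The revenue figure and weights are illustrative assumptions.

journey = ["facebook", "ppc", "email"]  # touchpoints, in order, for one sale
revenue = 1000.0

# Last click: the final touchpoint gets all of the credit.
last_click = {channel: 0.0 for channel in journey}
last_click[journey[-1]] = revenue

# Linear ratio: each phase of the journey earns a weighted share.
weights = {"facebook": 0.05, "ppc": 0.25, "email": 0.70}
linear_ratio = {channel: revenue * share for channel, share in weights.items()}

print(last_click)    # prints "{'facebook': 0.0, 'ppc': 0.0, 'email': 1000.0}"
print(linear_ratio)  # email still earns the largest share, but not all of it
```

Under last click, the email manager owns the entire conversion; under the linear ratio model, Facebook and PPC finally show up on the revenue report, even though email keeps the lion’s share.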

There are many implications to using a model such as this. The social media manager is very happy because he just went from being a nonexistent entity in this conversion to owning 5% of the revenue.

The email manager might not be quite as happy, but the marketing executive should be thrilled.

There are many more models to experiment with. First-click, U-shaped, custom models and linear modeling are just a few. We’re getting closer to really understanding why people buy our stuff, and how they arrive on our pages.

Moreover, we’ve attributed our revenue to particular interactions along the funnel, which should get us started in the process of assigning value to each marketing activity we undertake.

To learn more about each of the above attribution models, see Google Analytics’ definitions here.

Read more…


Marketing Data: Using predictive analytics to make sense of big data

December 21st, 2012

One buzz word/phrase that became very popular in business circles this year was “big data.” And, even though the term is trendy and probably overused, the overall concept has major implications for marketers.

Marketers are awash in campaign data, more so now than ever before. Email marketing campaigns produce data about open rates, clickthroughs, unsubscribes, and more. Visitor activity on company websites can be tracked, and in the case of registered users or leads flagged for scoring, that activity is not only tracked but also attributed to a particular individual.

Elements tracked can include the website visit itself and activities such as downloading Web content or watching embedded video. That tracking can get pretty granular, such as combining a series of website activities, or exactly where in an embedded video the viewer stopped the playback.

Taken as discrete pieces, all these data points are essentially meaningless. Taken together, they can provide insight into the tracked individual. Furthermore, subjected to deeper analysis, they can provide insight into what the most promising prospect or customer with the most long-term value looks like for the company.

This is where predictive analytics come into play. To provide more insight into predictive analytics and big data, I interviewed Omer Artun, CEO and founder of AgilOne, a cloud-based predictive marketing intelligence company. Omer also has an academic background in pattern recognition, data mining and complex systems.

Read more…


Marketing Research: Only 25% of marketers can show value to the organization

October 23rd, 2012

Recently, I had the opportunity to speak with Julie Schwartz, Senior Vice President of Research and Thought Leadership at ITSMA (Information Technology Services Marketing Association), and Laura Patterson, President of VisionEdge Marketing. Both were involved in recent marketing research, 2012 ITSMA/VEM Marketing Performance Management Survey: The Path to Better Marketing Results.

The survey was conducted during the summer of 2012 via email and social media invitation through Twitter and LinkedIn, and included 405 completed surveys.

Here is a chart outlining details of the respondents:

 


 

All respondents were analyzed by company type, company size and by a self-grading system (grade results included, and note that “D” was the lowest possible grade):

  • A – Marketing demonstrates contribution to the business: 25%
  • B – Marketing makes a difference, but contribution is not measured (these marketers were considered “middle of the pack”): 33%
  • C and D – Marketing may have an impact, but not known if impact is material (these marketers were considered “laggards”): 33% for “C” and 9% for “D”


 

Here are the key takeaways from the research:

  • Marketing’s satisfaction with its ability to measure, analyze and improve performance is shockingly low
  • Marketers are caught in a downward spiral as they report past performance to continually prove the value of marketing
  • A few exceptional marketers have cracked the code; they excel across the board in data, metrics, processes, tools, analytical skills and reporting
  • These grade “A” marketers can clearly demonstrate their value and contribution to the business
  • The number of “A” marketers has remained relatively constant over time, but we see a decline in the number of “B” marketers

Because the heart of this research was marketing performance management, the self-described grades listed above were created by the key question: What grade would the C-suite give your marketing organization for its ability to demonstrate its value and impact on the business?

Read more…


Conversion Rate Optimization: Your peers’ top takeaways from Optimization Summit 2012

August 16th, 2012

With B2B Summit 2012 right around the corner in Orlando, let’s take a quick look at your peers’ top takeaways from our last Summit – Optimization Summit 2012. In case you couldn’t be there, listen in to what fellow Web marketing directors and optimization managers learned at the Summit to help guide and prioritize your own A/B testing and landing page optimization efforts.

 


Here are some of the key takeaways. Feel free to use the links below to jump directly to these parts of the video …

0:34 – Celeste Parins of Mindvalley on value proposition

1:11 – Matt Silverstein of The Elevation Group on radical redesigns

1:40 – Matt Brutsche of Austin Search Marketing on getting into the mind of the customer

2:00 – Mike Weiss of Internet Sales Experts on understanding the customer’s path on your landing pages

2:42 – Ray Lam and Victoria Harben of the University of Denver on live optimization

3:03 – Suzette Kooyman of Enhance Your Net on taking a consumer-centric approach instead of a corporate approach

3:37 – Suzanne Axtell of O’Reilly Media on determining where to start testing and optimizing

4:10 – Reagan Miller of Financial Times on having a testing methodology

4:23 – Alan Markowitz of Ellie Mae on friction and anxiety points

4:50 – Diane Baker of netDirectMerchants on value proposition

 

Related Resources:

Optimization Summit 2012 Event Recap: 5 takeaways about test planning, executive buy-in and optimizing nonprofit marketing

Demand Generation: Optimization Summit 2012 wrap-up for B2B marketers

B2B Summit 2011: 5 takeaways on social media, lead generation, building a customer-centric approach, and more
