Ecommerce: Northwestern University study on how online reviews affect sales
Every week (as the name suggests), I write the MarketingSherpa Chart of the Week email newsletter. And so, every week, I come across interesting research and data, along with sources that add analysis and color to that research.
Usually, that analysis is confined to the MarketingSherpa Chart article. However, this week, my cup especially runneth over with good ideas and analysis that I thought you might find helpful on your ecommerce sites, especially as you set the groundwork for your holiday marketing initiatives.
When I interviewed Tom Collinger, the Executive Director of the Spiegel Research Center at Northwestern University, and Edward Malthouse, a professor at Northwestern’s Medill School and the Research Director of the Spiegel Center, we went well over our allotted time.
You can see their data and some of their analysis in this week’s Chart of the Week article — Ecommerce Chart: Star ratings’ impact on purchase probability. But if you’d like a deeper understanding of their research into how online reviews affect sales, I’ve included a lightly edited transcript of our conversation below. To make the transcript easily scannable for you, I call out key points with bolded subheads.
Bringing evidence to the answer of how newer forms of consumer engagement with brands drive financial impact
Daniel Burstein: Why don’t we jump in, and you give me a high-level overview of the type of work you’re doing here? I believe, Tom, we may have had you as a source in the past at one point.
Tom Collinger: Yeah. So, a little bit of background — Ed and I both worked with a friend, mentor and former colleague who has since passed away, Ted Spiegel, the grandson of the founder of the Spiegel Catalog Company. Ted, in his last several years, put his inspiration and money where his mouth was in the endowment of the Spiegel Research Center, an organization with a single-minded focus on bringing evidence to the question of how newer forms of consumer engagement with brands drive financial impact.
So, given that he came out of the mail order business, that’s basically how mail order marketers think — but with the focus on the newer forms of engagement that have not been, and continue not to be, very well researched. As a consequence, we stepped forward and sought the kinds of data sets that would allow us to do that, beginning with an uncommon data set.
All of what I’m saying at a high level is available on our website, with practitioner-facing explanations of the research. For those of you who are interested in the way in which we have published this work in academic journals, all of that’s available too. So, we took on a common question (that we were able to get a data set for): the relationship between social media posting and viewing, and the financial consequence of an individual’s viewing and posting.
We were able to get at the consequences of branded mobile app adoption and usage. We were able to move on, as you can see, to the impact and influence of customer reviews. We have moved into B2B, and we just spent the last nine months working with Deloitte, answering the question, “What are all of the ways in which they seek to connect and engage with clients and prospective clients across all digital as well as offline environments, and how does that engagement result in business?”
Ed Malthouse: So, content marketing.
Tom: Broadly, the financial consequence of how consumers connect and engage with brands, particularly in those areas that have historically not been well-researched or researched at all, and doing it with engagement and transaction data. So, it’s always facts and evidence based upon actual performance rather than self-reported surveys.
Only five-star ratings are “too good to be true”
Dan: Then let me ask you: based on the data in this chart, what meaning do you make of it, and what advice would you give to marketers based on it?
Ed: So, this is the “too good to be true” chart.
What I take away from it is that when consumers see all five-star reviews, they become skeptical — it’s too good to be true; there’s some manipulation happening. Therefore, what we see in that chart is that if you have an average star rating in the mid-fours, the purchase probabilities are higher than if it’s at five, which means having some negative reviews in there actually adds credibility to you.
I guess the point I would make is reviews work — as long as they’re credible. Retailers should not suppress negative reviews because it damages the credibility of the review ecosystem. The retailer’s interest is in providing useful information to the buyers who come to that website.
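(To make that “too good to be true” shape concrete, here is a minimal sketch using made-up numbers, not the Spiegel Center’s data. The inverted-U pattern Ed describes means a simple quadratic fit of purchase probability on average star rating has its vertex in the mid-fours rather than at a perfect 5.0.)

```python
# Illustrative only: simulated purchase probabilities by average star rating.
import numpy as np

avg_stars = np.array([3.0, 3.5, 4.0, 4.2, 4.5, 4.7, 5.0])
purchase_prob = np.array([0.020, 0.035, 0.050, 0.055, 0.057, 0.054, 0.045])

# A quadratic fit captures the inverted-U; its vertex estimates where
# purchase probability tops out.
b2, b1, b0 = np.polyfit(avg_stars, purchase_prob, deg=2)
peak = -b1 / (2 * b2)
print(f"purchase probability peaks near {peak:.1f} stars")  # mid-fours, not 5.0
```

(Any data with that inverted-U shape will behave the same way; the point is simply that the curve tops out before a perfect score.)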
Tom: So, let me just pile on that. The data point, which is actually significant, is that there is an obvious tendency [to suppress negative reviews]. In fact, there have been some very bad practices that you’re aware of that have been somewhat marginalized, as we understand it, in the very recent past — practices that have bought or incented five-star ratings from consumers, things that would argue on behalf of challenging the veracity of a rating.
I think what this tells us across all these categories — because the data holds; the slope is the same — is that the consumer is not stupid. What the consumer has more or less said is, “We don’t believe in nothing but five stars. We don’t believe in it.” That’s what their behavior reflects. So, Ed’s message about not eliminating negative reviews is a business practice. It’s not a philosophy. It flies in the face of what the natural marketing response might otherwise be.
So, it isn’t to say you’re happy that you’re getting negative reviews, but in fact, the presence of some negative reviews apparently increases the validity and authenticity such that people will purchase more when there is the presence of something other than pure fives. That’s significant, we think. That’s one.
Number two is that the actual act of a consumer engaging with the review — by that I mean, as Ed referenced, clicking on the review to scroll down and read the reviews beyond the star ratings — has a complementary effect as well on purchase behavior. So, engagement with the data and engagement with the review correlate with value.
When consumers actually take the time to read reviews, they trust them more
Ed: The only thing I’m going to add to that (that’s our four-way paper) is that when consumers go down to read reviews, they trust them more, so they follow them more. The distinction is between just seeing four stars with 50 reviews in the corner of your eye versus actually going to read the reviews.
If you actually go read the review, you’re going to pay more attention to it. If the reviews are saying it’s a two-star product, you’re much less likely to buy than if it’s a four-and-a-half-star product. Whereas if you’re just getting exposed to that in the corner of your eye, you’re not actively reading it, you pay less attention to it, so there’s less of a difference in your purchase probability between a two-star and four-star product.
Rank reviews chronologically
Tom: This gets into the weeds, but it’s still answering your question about giving advice for marketers. There are some marketers who rank reviews chronologically, by when they were posted, and others who order them from high to low. I’d let them stand chronologically. In fact, it might even tilt in favor of having them be inclusive of a representative sample of the reviews. That’s another action on behalf of the marketer, but if, in fact, it results in a more representative review sample, then that actually might be desirable.
Proactively ask for reviews from verified buyers
Ed: I can comment about the more representative sample part. We have some other work that talks about two ways of getting reviews. One way is you just go to the website; you find the product. Maybe you have to establish a login, and you type in your review. The other way is that you buy a product, and you get an email saying, “You just bought this product, please write a review.”
Of the two types of reviews, one is an email-solicited review from a verified buyer; the other is not necessarily from a verified buyer. It could be from someone off the street. There’s a systematic difference between those reviews. The reviews that come from the person off the street tend to be more negative — anywhere from half a point to a full scale point more negative than the reviews that you get from the email solicitations.
Our explanation for this is that it’s a whole lot more work. There’s a lot more friction in having to find that website, find the page of the product you want to review and perhaps create a login, than in just clicking on something in an email. Therefore, the only people willing to go through all that effort — finding all the webpages and logins and all that — are those who have an ax to grind, whereas the email-solicited reviews are going to get a broader cross-section of all the people who you know are buyers. Therefore, companies should really be soliciting reviews from their customers, from their verified buyers.
Tom: And that’s a term, by the way, that increasingly is used to distinguish the two types.
Ed: You’ll see badges next to reviews when you read them — this one is from a verified buyer, this one is not from a verified buyer. So, lots of reviews on, say, Amazon are not from verified buyers. Our research is saying that you want to get a broad cross-section of your verified buyers to write reviews so that it’s representative of your entire user base and not just the angry people.
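(Ed’s selection-bias argument can be illustrated with a small simulation. All the satisfaction distribution and response rates below are assumptions for illustration, not figures from the research — the point is that the friction of unsolicited reviewing acts like a filter that over-samples unhappy buyers.)

```python
# Illustrative only: simulated review selection for 100,000 verified buyers.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true satisfaction of the buyer base (most buyers are happy).
true_ratings = rng.choice([1, 2, 3, 4, 5], size=100_000,
                          p=[0.05, 0.05, 0.15, 0.35, 0.40])

# Email-solicited: a roughly uniform 10% of verified buyers respond.
solicited = true_ratings[rng.random(100_000) < 0.10]

# Unsolicited: unhappy buyers are assumed far more willing to hunt down the
# product page, create a login and post on their own.
willingness = np.where(true_ratings <= 2, 0.04, 0.01)
unsolicited = true_ratings[rng.random(100_000) < willingness]

print(f"email-solicited average: {solicited.mean():.2f}")
print(f"unsolicited average:     {unsolicited.mean():.2f}")  # ~half a point lower
```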
Reviews not as important for habitual products after consumers have made a first purchase
Tom: There’s another observation that came out of this research, which is that there are reasons why reviews no longer influence a particular purchaser. So we observed that, and we have theories as to why that happens, and they seem to be plausible theories. [For example,] I no longer care to read a review for my toothpaste because I’m buying Colgate no matter what, so I don’t need to read a review.
Ed: It’s a habitual product and you know a lot about it.
Tom: The term that Ed just used, “habitual purchase,” is our theory. We went and looked at all the circumstances in which there are people who don’t look at the reviews and still buy the product. The theory is it’s the kinds of products where they don’t need any more information; they’ve made their choice. They’re in the brand camp and off we go.
Now, the consequence of that to marketers — the advice to marketers — is that when a marketer has the opportunity to invite a consumer (whatever the strategy is) to more or less signal “this is my brand; when I’m in this category, it’s the one I want,” and make it easy for them to buy it — brilliant strategy. For example, you can subscribe and save on Amazon now. You basically say, “Yeah, I love this brand of coffee. I’m going to drink it. Every three weeks, I need another pound.” Bingo! Once I’ve done that, I’ve been locked in, signaling this is the only brand in the category that I care about.
Clearly, marketers have gotten smart about how to do that. So, “subscribe and save” is one example. If you go on grocery sites like Peapod, they have genius lists that basically remind you of the things you’re habitually buying. So, all the strategies that enable marketers to effectively get a consumer to say — this particular brand in this particular category is where I’m going to continue to go, and you don’t need to continue to sell or promote to me in these other spaces — are smart business. Obviously we’re seeing that happen, and our customer review analysis continues to point in that direction as well.
The riskier the consumer’s decision, the more reviews matter
Tom: Because we saw so many categories (and these are just five), we ended up seeing the differences this particular chart seems to reflect — as if we had mapped every single category and seen the entire range of different slopes.
And our hypothesis — a strong belief, though not yet a theory for which we feel we have enough broad evidence — is that when faced with decisions that carry a greater degree of uncertainty and/or risk to the individual, the influence of reviews matters more. This is why, if you’re buying athletic shoes (especially if you’re buying athletic shoes in an ecommerce environment), the uncertainty is great. That explains the shape of that slope.
When we get into other categories not reflected by this chart — uncertainty purchases — you’re buying a gift … buying products that you don’t buy regularly; there’s uncertainty there. When there’s something that has to do with your health, those are things that lead in that direction … those are theories.
Dan: OK. Let me ask you, too — this is one of the things I was thinking while looking at the chart — that the level of impact would also be significant because, for example, the level of impact of a light bulb would be a lot less than the level of impact of women’s athletic shoes they’d be wearing frequently. However, something like herbal vitamins — which I would assume would have a huge level of impact, since you can perhaps damage or kill yourself with herbal vitamins — didn’t have as big a difference as I expected.
Ed: So, if you go back to what Tom was saying, I would predict that with light bulbs, it depends on your uncertainty. If it’s an incandescent light bulb and you’ve been using incandescent light bulbs since you were a kid, there’s not a lot of product uncertainty. There’s not a lot of risk in the purchase of a $0.50 light bulb. But if it’s an LED light bulb, or a bulb for projecting onto a screen where you’re paying a fair amount of money and you don’t necessarily know the category that well, that’s when you’re really going to pay attention to the reviews.
Dan: OK.
Ed: Likewise with herbal vitamins, you’re right. If it’s a vitamin you’ve been taking every day, you don’t need to read a review on it. But if you have a new ailment, and you’re looking for an herbal vitamin to cure that and you don’t have much experience in the category, you’re going to pay attention to the reviews.
Number of reviews needed for credibility
Ed: The number of reviews is what I’m going to call a “moderator.” If you only have one or two reviews, consumers don’t trust them as much as if you have, say, 10 or more reviews. With more reviews, consumers trust the reviews more, in that the purchase probabilities depend much more on the stars — for example, a two-star product gets bought less often than a four-star or five-star product. Whereas if you only have one or two reviews, the relationship between the stars and the purchase probabilities is much weaker. A consumer might see the two-star review, but there’s only one, so “I’ll just write that off.”
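(For analytically minded readers, Ed’s “moderator” corresponds to an interaction term in a purchase model. The sketch below is a hypothetical illustration — simulated data and made-up coefficients, not the Spiegel Center’s model: stars strongly predict purchase once roughly 10 or more reviews exist, and barely predict it with one or two.)

```python
# Illustrative only: testing review count as a moderator of the star effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
stars = rng.uniform(1, 5, n)                 # average star rating shown
n_reviews = rng.integers(1, 200, n)          # review count shown
many = (n_reviews >= 10).astype(float)       # "enough reviews to trust" flag

# Assumed data-generating process: stars matter mostly when reviews are plentiful.
logit = -3 + 0.15 * stars + 0.55 * stars * many
buy = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([stars, many, stars * many]))
model = sm.Logit(buy, X).fit(disp=0)
print(model.summary(xname=["const", "stars", "many_reviews", "stars_x_many"]))
# A positive, significant stars_x_many coefficient is the moderator pattern:
# star ratings drive purchase probability far more once ~10+ reviews exist.
```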
Tom: Yeah. I’m piling onto what we talked about earlier. The consumer is not stupid. By that I mean, “What’s the lesson for a marketer in all of this?” — The consumer is smart. The consumer knows more than you think. The consumer cannot be controlled.
So, all of the things that marketers might have attempted to do, or in some cases still attempt to do, in order to alter the natural consequences of consumer response, tend not to work because, number one, the consumer has already said they don’t trust all five-star reviews. Right? So, the other evidence is that a few reviews at the very low end [of the star spectrum] instill greater confidence. Why? The consumer is not stupid.
I feel like that’s a wonderful and encouraging message for marketers, which basically is — fix the product, fix the customer experience. Don’t worry about fixing or rigging the customer review environment. Fix the way in which it works, and good things will happen.
Should you copy edit reviews that you use in advertising and marketing?
Dan: What some marketers like to do — we know the power of social proof and testimonials — is to take your review and publish it in some way through marketing, through advertising, to show, “Here’s the words that a real customer said about my product.” When that comes up, there’s essentially no quality control in the writing of these reviews. Some marketers feel they should copy edit the review to make it look more professional — cleaning it up so there are no grammatical errors or mistakes, without actually changing what the person says — and some marketers publish it as is, typos and all, to make it look more authentic. Do you have any opinion on that either way, any research on that?
Tom: We don’t have any research on it. We have opinions. Ed?
Ed: If you’re putting quotes around it, that means you don’t touch it.
Review text is a stronger predictor of purchase than star ratings
Ed: We’re messing around with a lot of computer sentiment analysis tools right now, comparing how well an analysis of the text of a review predicts your purchase probability versus just the stars — the volume and the valence. What we’re finding is that what’s in the actual text is a stronger predictor of your purchase than just the star and the count. So, there’s a lot of stuff in the text that is not captured by the star rating.
If you read a review for a Thai restaurant and they give it two stars, well, why did they give it two stars? Then you start reading the text and they say, “The food was too spicy.” If you like spicy food, that’s not a reason to give it a two-star review in your mind, whereas if they said it’s two stars because the service was slow and the kitchen smelled or something, then you’d say, “OK. I’m not going to go there.” The point is — what’s in the text itself is often much more valuable than just the star rating.
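(Here’s a minimal sketch of the kind of comparison Ed describes — review text versus stars as purchase predictors. The data and models are toy assumptions, not the team’s actual sentiment-analysis tooling: a TF-IDF bag-of-words model stands in for “analysis of the text,” scored against a stars-only baseline.)

```python
# Illustrative only: does review text predict purchase better than stars alone?
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical rows: (review text, stars given, did the reader then buy/visit?)
reviews = [
    ("too spicy for me but fresh ingredients", 2, 1),
    ("slow service and the kitchen smelled", 2, 0),
    ("great flavor, will order again", 5, 1),
    ("rude staff and dirty tables", 1, 0),
    ("spicier than expected, otherwise excellent", 4, 1),
    ("food arrived cold and late", 2, 0),
] * 25  # duplicated so cross-validation has enough rows

texts = [r[0] for r in reviews]
stars = np.array([[r[1]] for r in reviews], dtype=float)
bought = np.array([r[2] for r in reviews])

# Model 1: star rating alone.
stars_auc = cross_val_score(LogisticRegression(), stars, bought,
                            scoring="roc_auc", cv=5).mean()

# Model 2: the review text, as simple TF-IDF features.
text_features = TfidfVectorizer().fit_transform(texts)
text_auc = cross_val_score(LogisticRegression(), text_features, bought,
                           scoring="roc_auc", cv=5).mean()

print(f"stars-only AUC: {stars_auc:.2f}, text AUC: {text_auc:.2f}")
```

(In this toy setup the text model wins because the complaint topics, not the stars, track whether the reader buys — the same “too spicy” logic Ed describes.)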
How negative word-of-mouth from a policy change affects subsequent purchases
Tom: To pile on that, we have a study on our site that’s not about customer reviews, but it reinforces what Ed just said. It was a negative word-of-mouth study we did when a company posted a policy change (a change in their loyalty program policy), and there was a bit of a firestorm. The members posted negative commentary, and we did a text analysis. What we discovered is that — yes, there was a percentage, but a real minority, who used quite inflammatory language, and not shockingly, their subsequent purchasing went down.
But the vast majority, who did not use the most inflammatory language but still posted negative commentary, got it off their chest — such that they actually ended up purchasing and using the rewards from the loyalty program more. So, we can’t make that translation here exactly, but it’s one of those very unpredictable, curious outcomes from the study.
Customer-first marketing: Let the customer behavior tell the story
Dan: OK. Let me ask you just a final follow up. At MarketingSherpa, what we’ve published is inspirational stories about customer-first marketing. So, we describe customer-first marketing as different from customer-centric marketing. With customer-centric marketing, you use the data and all this great stuff to target the customer. With customer-first marketing, you’re actually changing what your conversion objective is and what your marketing is, to elevate the customer.
So, doing things like you talked about — making the right products for the customer, doing the right marketing for the customer, so the customer has this subtle feeling that they come first in the business — what we’ve seen from our own data and research is a significant difference in customer satisfaction from this. This then leads to other great things, such as recommendations and purchases, which I’m sure wouldn’t surprise you.
So, just as a final follow up, based on your overall body of research and including this, what advice would you give to marketers about customer-first marketing in general?
Tom: In every marketing and communication decision point, there is a tension between what we as the company believe we want the customer to do now and what the customer may be more naturally inclined to do even with some modest encouragement. That’s a spectrum, right? There’s a tension.
At one pole, that spectrum is, “We’re going to buy them off to do what we want them to do,” and at the other pole, “We’re just going to provide an environment in which the consumer can make a choice that’s right for them” — and everything in between. And in general, what we find is that if you presume the most draconian side, there’s a really good chance you’re going to be wrong.
Here’s a great example. We did a study, actually, with Peapod. We looked at the consequence of downloading the Peapod mobile application on subsequent purchase behavior. Now, everything that you read would suggest that, of course, when that happens, people will buy more and use the mobile app religiously because we’re all going mobile. So, you sit in your marketing den war room and you say, “Well, OK, what we’re going to do is push the hell out of this mobile app because we know that’s what consumers are wanting,” and you keep pushing it as opposed to letting the customer behavior actually tell the story.
When we looked at the data … yes, when people downloaded the app, there was an immediate spike in their purchase behavior. But, over time, what happened is the consumer actually ended up with a multi-channel buying strategy where they would buy certain habitual products using the mobile app, and they would buy other things that required more thought and consideration using tablets and computers.
Now, there’s a great example of the thing you’re talking about, customer-first marketing. I like the language. Customer-first marketing is “what works for you, how you are going to be satisfied best.” The answer, in this case, was letting people choose for themselves how they want to buy across platforms — as opposed to driving them toward a decision that the marketer, in the absence of this data, would otherwise be inclined to push, thinking but not knowing, “Well, sure, they downloaded the app, now they want to use the app, so we’re going to drive in on the app.” That’s customer-centric versus customer-first.
You can follow Daniel Burstein, Senior Director, Content, MarketingSherpa, on Twitter @DanielBurstein.
You might also like …
Ecommerce Chart: Star ratings’ impact on purchase probability
Email Marketing: Customer reviews boost clickthrough by 25% in ecommerce product emails
Authors: How to Get Your Business Book Published (if you’re reading this blog post because you want to get the right reviews for your book, you might be interested in this special report from the MarketingSherpa archives from 2008)
Download a free 54-page Customer Satisfaction Research Study to learn about our latest discoveries based on research with 2,400 consumers