
Just let people do their thing, you’ll have more success.

Online user reviews have become an essential tool for consumers, who increasingly rely on them to evaluate products and services before purchase. The business models of online review platforms such as Yelp and TripAdvisor, and of e-commerce sites such as Amazon and Expedia, critically depend on them. Should such sites pay users to encourage them to write reviews?

According to a forthcoming study in the INFORMS journal Marketing Science, a leading academic marketing journal, that is a bad idea. Paying users suppresses the number of reviews on social platforms, especially among those users who are socially well-connected and likely to be more influential.

The study is authored by Yacheng Sun of the University of Colorado, and Xiaojing Dong and Shelby McIntyre of Santa Clara University. The authors examine the response of a sample of customers to the introduction of a monetary payment program for user reviews on a social shopping platform in China.

The payment was roughly the equivalent of 25 cents per review, in credit for purchases from sellers affiliated with the platform. To the company’s surprise, the number of user reviews declined by over 30 percent in the month after the payment program was introduced, relative to the month before. “The familiar ‘Law of Supply,’ which implies that supply increases in response to higher prices, does not seem to hold true when it comes to paying for reviews on a social platform,” said Sun.

The paper explores why reviews drop in response to the monetary payments. The authors conjecture that the drop in reviews could be the result of community members’ concerns that their honest reviews – motivated by an intrinsic motive to either help others with relevant information or to present themselves as knowledgeable about the product or service – may now be interpreted by the community as simply driven by the less honourable extrinsic motivation of making money. If this were true, the drop in user reviews would be greater among users who had more friends on the social network, who could potentially misinterpret the user’s motivation for writing reviews.

The authors empirically test their conjecture by comparing the change in reviewing behaviour after the introduction of the payment scheme among “socialites” (more than five friends on the network) against “loners” (no friends on the network). Indeed, reviews from socialites drop 85 percent, from just over 0.4 reviews a month to just under 0.06 reviews a month. In contrast, the loners, who had little to lose in terms of social capital, increase their reviews from close to zero to about 0.03 per month. The increase in reviews from the loners, however, does little to offset the massive drop among the socialites, who are the heavy contributors overall. Hence the aggregate drop of 30 percent.

“Nobody wants to be seen as a paid shill for brands, so the users with more friends and followers, who were likely more influential and originally wrote more, are the ones who stop writing. A real double-whammy,” said Dong.

“Our results support the approach of industry leaders like Yelp or Amazon, which do not pay users for reviews. In fact, they tap into the intrinsic motive for social recognition through status badges for frequent contributors,” said McIntyre. “There may still be ‘under the radar’ ways to pay only the less socially active users for their reviews, but such targeting can be risky: the heavy reviewers may perceive it to be unfair and therefore stop writing reviews, if and when they learn about it.”

The complete paper is available here. 

 

A large number of reviews is not a reliable indicator of a product’s quality.

By MediaStreet Staff Writers

When we’re trying to decide which mobile phone case to buy or which hotel room to book, we often rely on the ratings and reviews of others to help us choose. But new research suggests that we tend to use this information in ways that can actually work to our disadvantage.

The findings, published in Psychological Science, indicate that people tend to favour a product that has more reviews, even when it has the same low rating as an alternative product.

“It’s extremely common for websites and apps to display the average score of a product along with the number of reviews. Our research suggests that, in some cases, people might take this information and make systematically bad decisions with it,” says researcher Derek Powell of Stanford University, lead author on the study.

“We found that people were biased toward choosing to purchase more popular products and that this sometimes led them to make very poor decisions,” he explains.

As opportunities to buy products and services online multiply, we have greater access than ever before to huge amounts of first-hand information about users’ experiences.

“We wanted to examine how people use this wealth of information when they make decisions, and specifically how they weigh information about other people’s decisions with information about the outcomes of those decisions,” says Powell.

Looking at actual products available on Amazon.com, Powell and colleagues Jingqi Yu (Indiana University Bloomington), Melissa DeWolf and Keith Holyoak (University of California, Los Angeles) found no relationship between the number of reviews a product had and its average rating. In other words, real-world data show that a large number of reviews is not a reliable indicator of a product’s quality.

With this in mind, the researchers wanted to see how people would actually use review and rating information when choosing a product. In one online experiment, 132 adult participants looked at a series of phone cases, presented in pairs. The participants saw an average user rating and total number of reviews for each phone case and indicated which case in each pair they would buy.

Across various combinations of average rating and number of reviews, participants routinely chose the option with more reviews. This bias was so strong that they often favoured the more-reviewed phone case even when both of the options had low ratings, effectively choosing the product that was, in statistical terms, more likely to be low quality.

A second online experiment that followed the same design and procedure produced similar results.

“By examining a large dataset of reviews from Amazon.com, we were able to build a statistical model of how people should choose products. We found that, faced with a choice between two low-scoring products, one with many reviews and one with few, the statistics say we should actually go for the product with few reviews, since there’s more of a chance it’s not really so bad,” Powell explains. “But participants in our studies did just the opposite: They went for the more popular product, despite the fact that they should’ve been even more certain it was of low quality.”
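The intuition behind that statistical claim can be sketched with a simple beta-binomial model. This is an illustration, not the authors' actual model, and the review counts and the 0.5 "decent quality" threshold below are hypothetical: treat each review as a positive or negative signal about a product's unknown true quality p, place a uniform prior on p, and compare the posterior probability that p is actually decent for two products with the same low share of positive reviews but very different review counts.

```python
import math

def prob_quality_above(threshold, positives, total,
                       prior_a=1.0, prior_b=1.0, steps=100_000):
    """P(true quality p > threshold | reviews) under a beta-binomial model
    with a Beta(prior_a, prior_b) prior, via midpoint numerical integration.
    Normalising constants cancel because we take a ratio of masses."""
    a = prior_a + positives
    b = prior_b + (total - positives)
    tail_mass = 0.0
    total_mass = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps  # midpoint of each sub-interval, never 0 or 1
        # unnormalised Beta(a, b) density at p
        dens = math.exp((a - 1) * math.log(p) + (b - 1) * math.log(1 - p))
        total_mass += dens
        if p > threshold:
            tail_mass += dens
    return tail_mass / total_mass

# Two products with the same low positive-review rate (30 percent):
few  = prob_quality_above(0.5, 3, 10)    # only 10 reviews
many = prob_quality_above(0.5, 30, 100)  # 100 reviews

# With few reviews there is still a real chance the product is decent;
# with many reviews at the same low rate, that chance is all but gone.
print(f"few reviews:  P(p > 0.5) = {few:.3f}")   # roughly 0.11
print(f"many reviews: P(p > 0.5) = {many:.6f}")  # close to zero
```

The point the sketch makes is exactly the one Powell describes: the more reviews a low-rated product accumulates, the more certain we should be that it really is low quality, so between two equally low-rated options the sparsely reviewed one is the statistically better bet.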

The researchers found that this pattern of results fit closely with a statistical model based on social inference. That is, people seem to use the number of reviews as shorthand for a product’s popularity, independent of the product’s average rating.

According to Powell, these findings have direct implications for both retailers and consumers:

“Consumers try to use information about other people’s experiences to make good choices, and retailers have an incentive to steer consumers toward products they will be satisfied with,” he says. “Our data suggest that retailers might need to rethink how reviews are presented and consumers might need to do more to educate themselves about how to use reviews to guide their choices.”