
How do you use User Ratings?

I've been thinking about user ratings on products again, a topic I last posted about here. It's a fascinating subject: we all make choices based on these ratings, but how?

One page mentioned a comic that is relevant not only to the subject but also, coincidentally, to Hurricane Sandy: TornadoGuard

Updates

  • Oct 12, 2013: The comments on John D. Cook's blog post “A Bayesian view of Amazon Resellers” are very interesting. A lot of smart people in the world! One commenter, Ian Maxwell, mentioned that the Rule of Succession could be used for these kinds of problems.
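Laplace's Rule of Succession estimates the probability of a positive outcome as (k + 1) / (n + 2), given k positives in n trials; it never returns 0 or 1 from finite data, so a product with only a couple of perfect reviews doesn't get an unbeatable score. A minimal sketch, assuming we treat reviews as simply positive or negative (the example counts are illustrative):

```python
def rule_of_succession(positive, total):
    """Laplace's rule of succession: the estimated probability that
    the next review is positive, given `positive` out of `total` so far."""
    return (positive + 1) / (total + 2)

# A product with 2 positive reviews out of 2:
print(round(rule_of_succession(2, 2), 3))      # 0.75
# A product with 138 positive reviews out of 171:
print(round(rule_of_succession(138, 171), 3))  # 0.803
```

Note how the two-review item is pulled down toward 0.5 much harder than the item with 171 reviews, which is exactly the behavior a naive average lacks.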

Links

  1. Collective Choice: Rating Systems
  2. How do you rate user ratings?
  3. TornadoGuard: The Problem with Averaging Star Ratings
  4. 5 star ratings. Bayesian or Weighted average?
  5. How Not To Sort By Average Rating
  6. What is the Rating Average and how is it calculated
  7. Algorithm for Rating Objects Based on Amount of Votes and 5 Star Rating
  8. Rating Scale
  9. A Bayesian view of Amazon Resellers
  10. Bayesian average
  11. Brewing a Better Rating System
  12. Discussion of Functional Design Options for Online Rating Systems: A State-of-the-Art Analysis
  13. Collaborative filtering
  14. ID3
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

How do you rate user ratings?

I’m puzzled by how shopping sites rank their product listings. For example, at newegg.com I searched for a product, got the results list, then set the display to sort by “Best Rating”. The top two items were:

Item#   #Reviews  #Excellent  #Good  #Average  #Poor  #Very Poor
foo-1          2           1      1         0      0           0
foo-2        171         112     26         7      9           1

Doesn’t that seem odd? The first item is listed first, yet it has only two reviews, whereas the second item has far more. True, the first item has no negative reviews; is that why it’s listed first? That doesn’t seem right to me.

I searched the site for an explanation but did not find one, and I don’t see an obvious alternative. My gut feeling is that the second item should be first: it doesn’t have the best rating score, but it has far more rating responses, so its score should be more reliable. Isn’t this covered in Statistics 101?

I’m sure there are nice algorithms or frameworks to make this more useful. Then again, maybe not; I’ve searched and found no definitive answers, yet there should be some. How do people rate user ratings? Gut feel only?
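One well-known approach (from “How Not To Sort By Average Rating”, linked above) is to rank items by the lower bound of the Wilson score confidence interval on the fraction of positive reviews: few reviews means a wide interval and a low lower bound. A sketch, assuming we lump Excellent and Good together as “positive”:

```python
import math

def wilson_lower_bound(pos, n, z=1.96):
    """Lower bound of the Wilson score 95% confidence interval
    for the true fraction of positive reviews."""
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z * z / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / denom

# foo-1: 2 of 2 reviews positive (1 Excellent + 1 Good)
print(round(wilson_lower_bound(2, 2), 3))      # 0.342
# foo-2: 138 of 171 reviews positive (112 Excellent + 26 Good)
print(round(wilson_lower_bound(138, 171), 3))  # 0.741
```

Under this ranking foo-2 comes out well ahead of foo-1, matching the gut feeling that 171 reviews deserve more trust than 2.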

Another example: I searched Amazon for Lee Child’s new book, “A Wanted Man”. The user ratings were:

              1 star  2 star  3 star  4 star  5 star
Actual           174     184     268     312     462
Normalized %    12.4    13.1    19.1    22.3    33.0

Based on the rating scores alone, without reading the feedback, is this a “good” book? Here are a few summary statistics, though be wary: my Statistics 101 was not recent:

Avg   Median  Mode    Var   StdDev
20    19      5 star  69.3  8.36

Using the SurveyMonkey computation we get a Rating Average of 3.5, i.e. a 4-star rating. The SurveyMonkey approach seems to use the vote count of each star level as a weight, which gives an expected-value computation.
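That weighted average is easy to reproduce from the table above: multiply each star level by its vote count, sum, and divide by the total number of votes. A quick sketch:

```python
# "A Wanted Man" star counts from the table above
counts = {1: 174, 2: 184, 3: 268, 4: 312, 5: 462}

total = sum(counts.values())  # total number of ratings
# Expected value: each star level weighted by its vote count
weighted = sum(star * n for star, n in counts.items()) / total

print(total)               # 1400
print(round(weighted, 2))  # 3.5
```

So with 1400 ratings the expected star value works out to about 3.5, matching the SurveyMonkey Rating Average.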

Here are a few references on this that I hope to read one day:

Further reading
