A student’s “average” level of satisfaction with their teachers could be “very satisfied” or “very dissatisfied” or anywhere in between. A consumer’s “average” experience with their telecom provider could be “excellent” or “very poor” or anywhere in between.
“Average” belongs on a Comparison Scale, not a Rating Scale.
- A rating scale gives absolute values (e.g., excellent, satisfactory, poor, etc.)
- A ranking scale is comparative (e.g., average, superior, worse than most, etc.)
- In some contexts, “average” may be “excellent”, while in other contexts “average” may be “bad”.
- For example:
  - If I’m thinking about my favourite online store, then my average experience is very good.
  - If I’m thinking about my internet provider, then my average experience is bad.
Example:
How would you describe your experience the last time you went to a movie theatre?
Rating Scale example:
- Excellent
- Very good
- Good
- Fair
- Poor
Comparison Scale example 1:
- Much better than usual
- A little better than usual
- Average / About the same as usual
- A little worse than usual
- Much worse than usual
Comparison Scale example 2:
- Far above average
- A little above average
- Average
- A little below average
- Far below average
Real World Example
The erroneous headline “One-In-Four Satisfied Customers Don’t Come Back” from a Forbes Magazine article is partially the result of a wonky mix of ratings and rankings in one response scale.
“If you were to rate a customer experience on a scale of 1 to 5 – where 1 is bad, 2 is fair, 3 is average or satisfactory, 4 is good, and 5 is excellent – how likely is it that you would return to this company or brand if you rated them a 3?” [Never, Not Likely, Not Sure, Likely and Very Likely]
There are a number of issues with this question and its response scale. Here we’ll focus just on the mix of ratings and rankings in the response scale.
Satisfactory customer service is treated as a synonym for customer satisfaction, although it isn’t. A lot of factors go into customer satisfaction beyond customer service.
Another glaring issue is that the article author has given respondents two options for the scale mid-point, which further compounds the error: respondents were told that a ‘3’ response means either “average” or “satisfactory.” These two words have different meanings, and it is not possible to know which word a respondent had in mind when they answered, or whether both words applied to them.
Unusually, the author states his bias openly: “First, let’s define satisfactory. I promote that satisfactory is simply average.” Promote it or not, a satisfaction rating and an “average” ranking are two different things.
The remarkable headline is not supported by the survey data. And yet, if taken at face value, it could lead organizations to divert people and budgetary resources to fixing a non-issue, or at least, to an issue of unknown magnitude and importance.
Elevate Your Game. Because Accuracy Matters.
To get accurate responses, the survey response rating categories could have looked something like this five-point rating scale:
- Very bad
- Bad
- Fair
- Good
- Very good
Although I would have preferred this six-point bipolar Likert rating scale, as it has more nuance and no mid-point:
- Very dissatisfied
- Somewhat dissatisfied
- A little dissatisfied
- A little satisfied
- Somewhat satisfied
- Very satisfied
Quick Summary
- Avoid mixing ratings and rankings in one scale (absolute measures vs. comparative measures).
- Report survey findings in a manner that reflects the actual survey question wording as closely as possible.