This blog post is part of a series called Asked and Answered, about writing great survey questions and visualizing the results with high impact graphs. Dr. Sheila B. Robinson is authoring the Asked series, on writing great questions. Dr. Stephanie Evergreen is authoring the Answered series, on data visualization. View the Answered counterpart to this post on Dr. Evergreen’s website.


Rating scale questions are ubiquitous. I can hardly imagine a survey without at least one. They are typically posed as multiple choice (one answer only) questions composed of a question stem and a set of response options. Like this:

[Image: a question stem with its response options]

Rating scale questions are generally easy to compose but tricky to get just right. A question like the example above may yield some insight, but not actionable insight. We might know the proportion of customers who are satisfied or dissatisfied, but we wouldn’t know why or what to do about it. To get actionable insights, we need to ask a different question, or at least additional questions. But, before even considering what insights you need…

Do your homework!

The key to designing quality survey questions (of any type) is like mise en place for a chef – have everything ready and in its place before you begin. For your survey to yield the best data you will need to have these “ingredients” ready:

1.) One or more research or evaluation questions. I call these the “big” questions to distinguish them from the questions we ask on surveys or in interviews.

2.) A clearly articulated purpose for using a survey. You need a clear picture of how a survey will help answer the big questions.

3.) An understanding of what you need to measure to answer your big questions.

4.) Knowledge about your respondents. Have an idea of who they are, what they know, and what kinds of questions they are capable and willing to answer.

Rating scale questions often appear as Agree-Disagree (AD) scales, or variations of AD scales such as the example above, which uses degrees of "satisfaction" instead of agreement. If we're not using AD, we're using IS, or "item-specific," wording. Writing item-specific response options can be a bit more challenging, and it's easy to get tripped up by wording, but the upside is that these questions often prove even more informative and useful than AD scale questions.

Response options MUST match question stems!

If the question asks “how useful” something is, the response options (answer choices) should feature degrees of “usefulness.”

[Image: example of the wrong and right way to match a question stem to its response options]

Here are a few examples of mismatches from real surveys:

[Image: examples of real survey questions with mismatched question stems and response options]

Odds or Evens?

Rating scales typically feature four or five points, the difference being whether you choose to include a “neither agree nor disagree” or similar option as a midpoint. The “odds or evens” midpoint debate has been ongoing since the dawn of surveys and people on both sides of the issue feel very strongly about it.

Here’s the thing:

There is no right or wrong answer because it’s not an “either-or” debate.

The way to determine whether you need a midpoint is to think deeply about what you need to measure, and about your respondents. Given the topic you're asking about, is it reasonable to think that someone may not have an opinion, that they genuinely feel "neutral" and have no leanings toward positive or negative? If so, choose an odd number of response options (so that there is a midpoint).

Scenario #1: Someone once asked my opinion of my school superintendent. I truly had no feelings about her. She was new. I never had contact with her, didn’t have a sense of what was important to her, and didn’t know much about her past performance. I truly didn’t lean toward a positive or negative opinion.

Scenario #2: If I were to ask a set of respondents how satisfied or dissatisfied they were with the place they currently live, I would expect virtually all of them to have either positive or negative feelings, and so would choose a 4- or 6-point scale for that question, forcing them to choose a response with a positive or negative leaning.

How many scale points to use depends on your specific context. In general, 4-7 points work well, and up to 11 is fine too. Net Promoter Score questions, with their 11-point (0-10) scales, are wildly popular and used by tons of companies and other organizations.

N/A is NOT a Midpoint

Keep in mind that response options such as “don’t know” or “not applicable” are not midpoints. If it’s reasonable to think that some respondents may genuinely need these non-substantive options, they should be placed at the far end of the scale and visually separated from the other options so as to not create a visual midpoint.

Pro Tips:

  • Match question stem to responses
  • Offer 4-7 response options/scale points (but up to 11 is fine too)
  • Put the least desirable response first (this mitigates the primacy effect: respondents choosing the first option they see)
  • Odd or even number of scale points? (Depends on context!)

In the other posts in our Asked and Answered series, we provide options for Check All That Apply questions, Ranking questions, and Demographics.

See you soon.

We go into way more detail on these topics in our books. Dr. Sheila B. Robinson is co-author of Designing Quality Survey Questions. Dr. Stephanie Evergreen wrote Effective Data Visualization.
