When Kim Firth Leonard of the actionable data blog and I write together, we usually refer to each other with a superlative – fabulous, magnificent, wonderful, etc. (all totally accurate, of course) – but now, I’m even prouder to call her co-author! Yes, we are in the throes of writing a book on survey design!
After a very successful presentation to a packed room at Evaluation 2014 in Denver, CO (if you were there, thanks!), we met with an editor at Sage Publications to pitch our idea. Now we’re busy fleshing out chapters and excited to share bits with our readers along the way.
The foundation of our collaborative work lies here: “how evaluators ask a question can dramatically influence the answers they receive” (Schwarz & Oyserman, 2001, p. 128).
Our common interest has always been in question design and development, and our work together is grounded in a movement called the Cognitive Aspects of Survey Methodology (CASM). CASM began with a meeting of the minds – literally! Survey researchers and cognitive scientists convened in 1983 for some rich dialogue and idea exchange on how their worlds could collide and result in collaborative research. It was groundbreaking: one might assume these two disciplines had been closely aligned for years (after all, survey researchers understood as early as the 1940s that question wording could impact responses), but in fact, they were not.
The cognitive sciences are concerned with the study of such processes as understanding language, remembering and forgetting, perception, judgment, and inferring causes. Because all of these and other cognitive processes are important in survey research interviews, it would not be surprising to find a fairly long history of collaboration between cognitive scientists and survey researchers in problems of mutual interest. Strangely enough, however, until a few years ago members of the two disciplines appear to have had little contact (Jabine, Straf, Tanur & Tourangeau, 1984, p. 1).
The idea during the seminar was that each discipline could potentially contribute to the other. Survey researchers could understand more about the cognitive tasks their questions present to respondents, and cognitive scientists could better understand how surveys could aid in their research, which, up until this time, relied heavily on laboratory experiments.
Both groups understandably wanted to deeply investigate the potential uses and limitations of self-reported data, and they identified key issues such as how memory is organized and how respondents make estimates and judgments when answering questions about behavior.
A wealth of rich literature has emerged, especially in the last two decades – a prodigious product of this beautiful relationship. One of our favorite pieces is Asking Questions About Behavior: Cognition, Communication, and Questionnaire Construction, a synthesis by Schwarz and Oyserman of some of this body of research. Here’s the controlling idea that really catalyzed our interest in quality question design:
In posing such questions, researchers implicitly hope that participants will
(1) understand the question,
(2) identify the behavior of interest, and
(3) retrieve relevant instances of the behavior from memory…
(4) correctly identify the relevant reference period (e.g., “last month”),
(5) search this reference period to retrieve all relevant instances of the behavior,
(6) correctly date the recalled instances to determine whether they fall within the reference period, and
(7) correctly add up all instances of the behavior to arrive at a frequency report…
(8) map this frequency onto the response alternatives provided by the researcher… [and]
(9) candidly provide the result of their recall effort to the interviewer.
Implicit in these—rarely articulated—hopes is the assumption that people know what they do and can report on their behavior with candor and accuracy, although they may not always be willing to do so. From this perspective, the evaluator’s key task is to ask clear questions about meaningful behaviors in a setting that allows for candid reports [our emphasis] (Schwarz & Oyserman, 2001, p. 129).
Whew! That was a lot of work just reading that list! Now think about what a respondent experiences when you present them with a 175-question survey!
As we continue this journey of responding to Schwarz and Oyserman’s call to action by writing and presenting about quality question design, won’t you join the conversation? We’d love to know your favorite resources, tips, tricks, and advice for designing surveys. We’d also love to meet like-minded researchers on social media – or better yet, in person!
Please join us at our Evaluation 2015 skill-building session – Crafting Quality Questions: The Art & Science of Survey Design in Chicago!
Subscribe to this blog to receive future posts on survey design including our work-in-progress, a checklist of survey design principles. Want to check out our earlier posts on survey design? Click here.