As evidenced by recent posts co-authored with fellow blogger Kim Firth Leonard of actionable data (read them here and here), I’m fascinated with surveys and survey research. Just last week, another fellow blogger, Brian Hoessler of Strong Roots Consulting, offered this post on open-ended questions.

I shared with Brian that I recently saw a needs assessment instrument composed entirely of open-ended questions – maybe a dozen in all. Whenever I encounter surveys with open-ended questions, I wonder whether the qualitative data collected are indeed systematically analyzed and not just scanned or read through, especially in the case of very brief responses. If the data are analyzed, I wonder what kinds of coding strategies evaluators use – inductive or deductive? Constant comparison? Grounded theory?

Brian responded to my comment with a concern about bias: “if we only give a cursory glance to open-ended question responses, we run the risk of all sorts of biases in making sense of the data.” I agree. In fact, confirmation bias is particularly pertinent to this conversation. If evaluators don’t systematically analyze qualitative data, how DO they control for bias?

When we simply read through narrative responses without a process for systematic analysis (i.e., coding, categorizing, theming), confirmation bias causes us to remember or focus on that with which we agree, or that which matches our internalized frameworks, understandings, or hypotheses. We also tend to remember or focus on responses expressing strong feelings – the extremes, whether positive or negative. The danger lies in making meaning from data without analysis, and basing decisions on those biased meanings.

I’ve spent years teaching university and professional development courses, all of which feature end-of-course evaluation surveys, and I spent most of those years just reading through feedback forms, constructing meaning from quick read-throughs. In fact, one semester, after teaching a new course to a particularly large group, my reviews weren’t stellar. I wallowed in self pity for a bit before doing what I knew I needed to do – make substantial changes to the syllabus and my teaching methods. Thankfully, I called a close friend and fellow professor to vent. “I did terrible!” I told her. “They hated the course!”

© Sheila B Robinson 2005


She asked me one simple question. “How many of them?”

“A lot!” I assured her.

“Look again. How many?”

Sure enough, after poring through with a more careful read and sorting forms into primitive categories – positive, neutral, and negative – I discovered what my friend suspected: very few negative reviews after all. With an existing framework of self-doubt, bias bit me bad. I was ready to make sweeping changes to a course that was already successful for most, based on a small percentage of respondents’ views. It’s not that I dismiss those views; after all, there was certainly room for improvement. But having done even a minimal level of data analysis, I could make better-informed decisions, ones more likely to address student needs effectively.
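The primitive sort described above amounts to a simple tally and proportion. As a minimal sketch (in Python, with made-up counts standing in for my actual feedback forms), even this crude step turns a vague impression into a number you can reason about:

```python
from collections import Counter

# Hypothetical sentiment codes assigned to each feedback form
# during a quick manual sort (illustrative counts, not real data).
codes = (["positive"] * 38) + (["neutral"] * 7) + (["negative"] * 5)

counts = Counter(codes)
total = len(codes)

# Share of responses in each category
shares = {category: count / total for category, count in counts.items()}

print(counts["negative"], round(shares["negative"] * 100))  # prints: 5 10
```

Ten percent negative reads very differently from "they hated the course" – which is exactly the point of doing even a minute level of analysis before acting.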

While this is hardly true “qualitative research” or “grounded theory” in action, it speaks to the importance of analysis and respect for respondents. Qualitative “…methodology enjoins taking with great seriousness the words and actions of the people studied” (Strauss & Corbin, 1998, p. 6).

Question type choices for surveys should be solidly based on information needs and feasibility of analysis. We should never collect data we don’t intend to analyze or use in answering evaluation questions that ultimately lead to programmatic (or policy) decisions; doing so only adds respondent burden.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research (2nd ed.). Thousand Oaks, CA: SAGE Publications.