With a properly-framed question, finding an elegant answer becomes almost straightforward.
-Stephen Wunker, Asking the Right Question
Here’s what every successful leader, coach, marketer, or teacher knows: You cannot overstate the importance of asking the right questions.
It’s no different for evaluators.
Evaluation questions form the foundation of solid evaluations, while also serving to frame and focus the work. Starting with the right questions can affect the quality and appropriateness of key programmatic decisions that rest on evaluation results. In fact, clearly defined evaluation questions are part and parcel of a successful evaluation – that is, one that is actionable and is used.
That said, evaluations are carried out all the time without the benefit of well-articulated, focused questions. How often have you been asked to “evaluate the program,” or “study the program” to “find out whether it’s working” or “to see if it’s effective?”
Taken at face value these are perfectly legitimate questions, but they have no practical utility until “working” or “effective” are clearly defined – until the questions are sufficiently focused to inform and direct the evaluation design. What does “working” look like? How will you recognize “effectiveness”? When I pushed to get a definition of “working” before beginning an evaluation, the client told me they wanted to know “whether the program has made a difference” for the target population. “What kind of difference?” I asked. It took a great deal of conversation before we were able to settle on what exactly that meant.
This post explores evaluation questions from three angles:
- Understanding the nature of evaluation questions
- Three critical functions of evaluation questions
- Considerations for crafting quality evaluation questions
Understanding the nature of evaluation questions
Evaluation questions are the broad, overarching, or “big picture” questions an evaluation is expected to answer. If answered and actionable, they help us (or help us support those who) make key programmatic decisions. They are distinct from the individual questions asked on measurement instruments such as surveys or interview protocols, although they can often overlap.
Evaluations often include descriptive questions such as:
- To what extent was the program implemented as designed?
- How many of the target population did we reach?
- Did the program meet its stated goals?
- What outcomes were achieved?
However, it is also important to ask “explicitly evaluative questions” (for a detailed discussion of these, read *Actionable Evaluation Basics* by E. Jane Davidson):
- How *well* was our program implemented?
- How *adequate* was our reach? Did we reach the *right* target population?
- How *important* were the outcomes? How *valuable* are they?
- How *substantial* was the change we observed?
The words in italics signify that the question is evaluative. Other words such as meaningful, significant, or appropriate might be found in evaluative questions as well.
Three critical functions of evaluation questions
Just as a building’s foundation functions to bear the load of the building, anchor it against potentially damaging natural forces (e.g., earthquakes), and shield it from (also potentially damaging) moisture, evaluation questions can be thought of as having three similar functions:
1.) They bear the load of the evaluation. The evaluation approach, the social science theories that inform it, and choices about evaluation design and selection of measures all rest squarely on the evaluation questions. These questions set the purpose for the entire evaluation.
2.) They anchor the evaluation against potentially damaging “forces.” What could potentially damage an evaluation? Looking for the wrong indicators (e.g., those most readily observable), selecting the wrong measures (e.g., the most readily available, cheapest, or easiest to administer), collecting the wrong data, engaging the wrong stakeholders (e.g., those easiest to access), sampling the wrong respondents… You get the picture. Leveraging evaluation questions as the anchor lends critical purpose to all choices you make as you craft the evaluation.
3.) They shield the evaluation from that which can seep in slowly and destroy it: distrust, disdain, fear, misplaced expectations. These insidious, dysfunctional attitudes toward evaluation can fester and erupt at any time in the evaluation life cycle. Clearly articulated questions give the evaluator the ability to defend against them and the potential to address them productively.
Considerations for crafting quality evaluation questions
There’s no dearth of good advice available. Here are some considerations for developing evaluation questions that I’ve assembled over the years:
- What are the information needs?
- Whose information needs are going to be considered?
- What do you need to know about the program for the purpose of the evaluation?
- What do you need to know in order to make (or support others to make) necessary decisions about the program?
- Will evaluation questions be determined by the evaluator, program personnel, other stakeholders, etc.? Will they be developed collaboratively?
Community Toolbox offers this:
You choose your evaluation questions by analyzing the community problem or issue you’re addressing, and deciding how you want to affect it. Why do you want to ask this particular question in relation to your evaluation? What is it about the issue that is the most pressing to change? What indicators will tell you whether that change is taking place?
The venerable CDC describes its strategy to help evaluators and offers a checklist for assessing your evaluation questions: “To help get to ‘good questions’ we aggregated and analyzed evaluation literature and solicited practice wisdom from dozens of evaluators. From these efforts we created a checklist for use in assessing potential evaluation questions.”
Better Evaluation offers this advice: “Having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it,” and links to additional resources for crafting key evaluation questions.
Sure, “detailed questions are not as exciting as brilliant answers,” claims Wunker. But you’ll never get brilliant answers without them, says Sheila.
Image credits: gak, play4smee, existentialism, and patterned via Flickr.