A recent experience reviewing a professional organization’s conference proposals for professional development sessions reminded me of the challenge program designers/facilitators encounter in identifying and articulating program outcomes. Time after time, I read “outcome” statements such as: participants will view video cases…, participants will hear how we approached…, participants will have hands-on opportunities to…, participants will experience/explore…and so on. What are these statements describing? Program activities. Program outputs. What are they not describing? Outcomes.
OUTPUTS ≠ OUTCOMES
OUTPUTS are the products of program activities, or the result of program processes. They are the deliverables. Some even use the term “outputs” interchangeably with “activities.” Outputs can be identified by answering questions such as:
- What will the program produce?
- What will the program accomplish?
- What activities will be completed?
- How many participants will the program serve?
- How many sessions will be held?
- What will program participants receive?
OUTCOMES are changes in program participants or recipients (aka the target population). They can be identified by answering the question:
How will program participants change as a result of their participation in the program?
In other words, what will program participants know, understand, acquire, or be able to do?
Change can occur for participants in the form of new or different levels of:
- awareness
- understanding
- learning
- knowledge
- skills
- abilities
- behaviors
- attributes
When I teach graduate students to compose outcome statements, I ask them to visualize program participants and think about what those participants will walk out the door with (after the program ends) that they did not walk in with.
An important distinction is that we can directly control outputs, but not outcomes. We can control how many sessions we hold, how many people we accept into the program, how many brochures we produce, etc. However, we can only hope to influence outcomes.
Let me offer a simple example: I recently took a cooking course. I was in a large kitchen with a dozen stations equipped with recipes, ingredients, tools, and an oven/range. A chef gave a lecture, and demonstrated the steps listed on our instruction sheets while we watched. We then went about cooking and enjoying our meals. As I left the facility, I realized that for the first time, I actually understand the difference among types of flour. Yes, I can explain the reasons you may choose cake flour, all-purpose flour, or bread flour. I know how much wheat flour I can safely substitute for white, and why. I can describe the function of gluten. I can recognize when active dry yeast is alive and well (or not). I am able to bake a homemade pizza in such a way that the crust is browned and crispy on the outside, yet moist on the inside. I could do none of these things prior to entering that cooking classroom. If the program included an assessment of outcomes (beyond the obvious “performance assessment” of making pizza), I’m certain I could provide evidence that the cooking course is effective.
However, if the sponsoring organization proposed this course including a list of “outcomes,” what might they be? See if this sounds familiar: Participants will hear a chef’s lecture on flour types and the function of gluten, and will see a demonstration of the pizza-making process. Participants will have the opportunity to make enough homemade dough to bake two 12″ pizzas with toppings of their choice and will use a modified pizza stone baking process.
We know now that those are indeed potential outputs, or program activities. If program personnel decide to evaluate this course based on the articulation of these “outcomes” (which, of course, are not outcomes), what might that look like?
Did participants hear a lecture? Check. Did they see a demonstration? Check. Did they make dough and bake pizzas? Check. Success??? Perhaps. But what if those responsible for the program want to improve it? Expand it? Offer more courses and hope that folks will register for them? Will knowing the answer to these questions help them? Probably not.
To be sure, outputs can be valuable to measure, but they don’t go far enough in helping us answer the types of evaluative questions we need for making programmatic decisions and continuous improvement (e.g., To what degree did participants change? How valuable or important is that degree of change? How well did the program do with regard to facilitating change in participants?).
Outcomes are much more difficult to identify and articulate, but well worth the effort, and best done during the program design and evaluation planning process.
For more on outcomes vs. outputs, check out these links:
More about outcomes – why they are important…and elusive! (Sparks for Change blog)
It’s Not Just Semantics: Managing Outcomes vs. Outputs (Harvard Business Review)
Management is for programs, leadership is for people.
Agreed! 🙂
Great post, Sheila. Now I want to take a cooking class so I can use a pre-test/post-test design complete with pictures and survey to accurately gauge the outcomes. Pizza outcomes could become as well known as the chocolate chip cookie evaluation exercise by Hallie Preskill and Darlene Russ-Eft!
I love the chocolate chip cookie evaluation exercise! I use that every semester with my Program Evaluation students.
I love this, Sheila — Outputs/Outcomes…Tricks are for kids….Rewriting grant RFP instructions and will refer here. Thanks! Tom
Exactly what I was thinking Tom! Silly rabbit…
This is a nice way of putting it, Sheila!
Thanks Chi! 🙂
Great post Sheila! Loved the pizza example. We wrote a post last week about M&E planning traps and one of them was focusing efforts on measuring outputs and activities and neglecting to measure outcomes. Someone asked us for some examples so I will refer them to your pizza story 🙂
Thank you Cameron! Isn’t it wonderful when you don’t even have to make things up? 🙂 I enjoyed your post as well! I especially like the section on Trap #2: Missing the Middle, and the way in which you identify “sexy high level outcomes” and “low hanging fruits (outputs and activities).”
Those of you following these comments can read “5 common monitoring and evaluation planning traps and how to avoid them” here: http://www.clearhorizon.com.au/discussion/5-common-monitoring-and-evaluation-planning-traps/
Thanks for sharing our post Sheila and I am glad you got something out of it.