Who hasn’t answered the question, “What did you learn?” after attending a professional development session? As a PD facilitator and evaluator, I’ve certainly used feedback forms with this very question. After all, measuring participant learning is fundamental to PD evaluation.
In this post, I’ll share examples of actual data from PD evaluation in which we asked the direct question, “What did you learn?” I’ll then explain why this is a difficult question for PD participants to answer, resulting in unhelpful data. Next, I’ll offer a potential solution in the form of a different set of questions for PD evaluators to use in exploring the construct of participant learning. Finally, I’ll show where participant learning fits into the bigger picture of PD evaluation.
What happens when we ask “What did you learn?”
Here are examples of actual participant responses to that question:
- After a session on collaborative problem solving with students with behavioral difficulties: How to more effectively problem solve with students
- After a session on co-teaching: Ways to divide up classroom responsibilities
- After a session on teaching struggling learners: Some new strategies to work with struggling students
In my experience, about one-third to one-half of participant responses to that ubiquitous question are little more than restatements of the course title, and thus just as uninformative to an evaluator as the title itself.
On the futility of asking “What did you learn?”
It’s challenging to get people to clearly articulate what they have learned on a feedback form distributed after a professional development session. Whether the question is asked immediately after the learning has taken place, or after some time has passed and the participant (in theory) has had time to process and apply the learning, the outcome (in terms of the data collected) is the same. People don’t seem to be able (I’m working under the assumption that they are indeed willing) to answer “What did you learn?” with the depth and richness of written language that would help professional learning planners make effective decisions about future programming. They’re not to blame, of course. It’s just as difficult for me to answer that question when I’m a participant.
Parents and teachers know this: When you ask a child a question and he or she answers with “I don’t know,” that response can have a whole range of meanings from “I can’t quite articulate the answer you’re looking for” to “I’m not certain I know the answer” to “I need more time to process the question” to “I don’t understand the question” to “I really don’t know the answer” to “I don’t want to tell you!” It’s no different for adults. Someone who answers “What did you learn?” by essentially restating the title of the PD session is in effect saying, “I don’t know.” As a PD evaluator, it is my job to figure out exactly what that means.
How else can we know what participants learned?
Of course, we’re talking about surveys – self-reported perceptions of learning. There are certainly other ways for evaluators to gain an understanding of what participants learned.
We can interview them, crafting probes that might help them more clearly articulate what they learned. Interviews provide dedicated time and the opportunity for participants to give the question their full attention. In contrast, surveys are often completed when participants feel rushed at the end of a PD session, or at a later time, when they are fitting survey completion in among a myriad of other job duties.
We can observe participants at work, looking for evidence that they are applying what they learned in practice, thus getting at not only what they may have learned, but also what Kirkpatrick called “behavior” and Guskey calls “participant use of new knowledge and skills” (see below for more on these evaluators and their prescribed levels of PD evaluation).
Both interviews and observations, however, are considerably more time-consuming and thus less feasible for an individual evaluator.
As an alternative, I wondered what might happen if, rather than asking, “What did you learn?” we asked, “How did you feel?” Learning has long been closely associated with emotion. (For more on learning and emotions, check out this article, this one, and this one, and look at the work of John M. Dirkx and Antonio Damasio, among many others.) Would PD participants be better able to articulate how they felt during PD, and would their learning then become evident in their writing?
What happens when we ask a different question?
Well, a different set of questions, really. A colleague and I created a new feedback form to pilot with PD participants, in which we seek to understand their learning through a series of five questions. We discussed at length what we want to know about participants’ learning in order to inform our programmatic decisions. We concluded that it is not necessarily the content (if participants attend a course on instructional planning, we expect they will learn something about instructional planning), but rather whether participants experience a change in thinking, whether they feel they have learned a great deal, and whether the content is new to them.
We begin with these three questions, using standard 5-point Likert response options (Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree):
- This professional learning opportunity changed the way I think about this topic.
- I feel as if I have learned a great deal from participating in this professional learning opportunity.
- Most or all of the content was a review or refresher for me (this question is reverse-coded, of course).
We then ask participants about their emotions during the session with a set of “check all that apply” responses:
During this session I felt:
- Energized
- Renewed
- Bored
- Inspired
- Overwhelmed
- Angry
- In agreement with the presenter
- In disagreement with the presenter
- Other
Finally, we ask participants to “Please explain why you checked the boxes you did,” and include an open essay box for narrative responses.
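For those who will work with the resulting data, here is a minimal sketch of how responses to this form might be summarized, assuming the Likert options are coded 1–5 and the emotions item is stored as a list of checked labels per respondent. The field names and sample values below are hypothetical, not our actual data:

```python
from collections import Counter

# Hypothetical responses to the three Likert items, coded 1-5
# (1 = Strongly disagree ... 5 = Strongly agree).
likert_responses = [
    {"changed_thinking": 4, "learned_great_deal": 5, "mostly_review": 2},
    {"changed_thinking": 3, "learned_great_deal": 4, "mostly_review": 4},
]

# Hypothetical check-all-that-apply selections from the emotions item.
emotion_responses = [
    ["Energized", "Inspired", "In agreement with the presenter"],
    ["Overwhelmed", "In disagreement with the presenter"],
]

def reverse_code(value, scale_max=5):
    """Flip a reverse-keyed item so that higher always means more learning."""
    return scale_max + 1 - value

# Reverse-code the "review or refresher" item before summarizing.
for response in likert_responses:
    response["mostly_review"] = reverse_code(response["mostly_review"])

# Mean score per Likert item.
item_means = {
    item: sum(r[item] for r in likert_responses) / len(likert_responses)
    for item in ("changed_thinking", "learned_great_deal", "mostly_review")
}

# How often each emotion was checked across respondents.
emotion_counts = Counter(label for checked in emotion_responses for label in checked)

print(item_means)
print(emotion_counts.most_common())
```

The narrative “please explain” responses, of course, still need to be read and coded by hand; the sketch covers only the closed-ended items.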
I’ve only seen data from one course thus far, but it is quite promising in that participants were very forthcoming in their descriptions of how they felt. Through their descriptions we were able to discern the degree of learning and, from many responses, how participants plan to apply that learning. We received far fewer uninformative responses than in previous attempts to measure learning with the one direct question. As we continue to use this new set of questions, I hope to share response examples in a future post.
Where does participant learning fit into the PD evaluation picture?
Donald Kirkpatrick famously proposed his four levels for evaluating training and professional development – essentially measuring participants’ 1) reactions, 2) learning, 3) behavior, and 4) results – in the 1950s. Thomas Guskey later built upon Kirkpatrick’s model, adding a fifth level – organizational support and learning (Guskey actually identifies this as level 3; for more on this topic, see this aea365 post I wrote with a colleague during a week devoted to Guskey’s levels of PD evaluation, sponsored by a professional development community of practice).
For hardcore evaluation enthusiasts, I suggest Michael Scriven’s The Evaluation of Training: A Checklist Approach.
What other questions could we ask to understand PD participants’ learning?
I welcome your suggestions, so please add them to the comments!
Thanks for a thoughtful post, Sheila. And as Craig mentioned, the comments have been equally informative. Like Chad, I have started adding the pre-post question to my feedback forms. Recently I started asking how participants plan to use what they learned in their work, and whether they plan to share the information with colleagues. I use the sharing-with-colleagues question as an indicator of perceived usefulness, and the open-ended question about how they plan to use the information provides insight into the ways in which they found it useful.
Thanks Jean! We do often ask “How will you use what you have learned?” and after getting few thoughtful responses to that, we started providing response options. People willingly check those options, but I’m not certain their answers have been helpful to us in our context. We may want to consider asking IF they plan to use what they have learned in their work, instead of presupposing that they will! 🙂
Great examples here, Sheila! And the comments have been helpful too, so thanks for raising the topic.
I love the idea of asking how people felt, and supplying a checklist of emotions like “bored” and “inspired”. As those are even quicker to answer than Likert questions (and quite fun!), I imagine you get very good response rates there.
One addition might be to ask, in effect, “What didn’t you learn?” In other words, what unanswered question(s) do people have at the end of the session?
Please also see my own post on feedback questions, which is a quick read.
Thanks Craig! “What didn’t you learn?” is an interesting question and I’ll consider how I might be able to use that in my work. We often ask “What questions do you still have?” and in my experience, we don’t often get answers to that.
Hi Sheila-
Thanks for the post! I like how you framed this topic and will share it with others in my organization. This is something I have been thinking about for a while. It is helpful to read your thoughts and how you have been implementing them.
For the past four years, I have evaluated a large annual conference for a network of professionals who have child trauma grants. More than a typical conference, this event is designed to support traditional and social learning, as well as facilitate collaboration between grantees. I avoid the direct questions now, as I felt I was getting quite a bit of bias (I just spent $1500 and 3 days of my time – of course it was wonderful!!!). Instead, I use both closed and open-ended questions to explore the value of the conference and what appears to result from it. In fact, I try to avoid the use of the word “why” as much as I can. A few things I have found to be very useful:
I ask the participants to check off, from a long list of options, experiences that they had during the conference, such as “had my thinking challenged,” “developed new collaborative relationships,” and “shared my experiences with another person.” This gives an idea of the depth and breadth of their experiences.
I also ask them to narrow down the primary result of attending the conference, checking one answer from a group of statements that include “I have new ideas to meet our/my current challenges,” “My technical/clinical competency increased,” and “I developed valuable relationships and access to member expertise.”
Finally, I ask them to share with me a meaningful experience that happened during the conference and how it made a difference personally or professionally.
This evaluation takes a fair amount of work; however, the results have been amazing. This combination of questions has helped us learn about the value of the conference and the key role it plays within the network. We have a much clearer idea of what is getting developed as a result of the conference, and we have even been able to identify some emergent issues that are important to our mission.
Wow, great comments and ideas Lisa! Thanks so much for sharing these. I appreciate how you outline the purpose for asking each question. That certainly helps focus the data collection and analysis, and directly leads to how you use your results. My challenge at the moment, as I mentioned in a reply to Chad, is that I’m working with a form that must serve multiple events of different types for different audiences. I’m trying to balance the need for good data with the feasibility issue of not being able to create customized forms for each event and audience. As you have suggested, I’ve had better outcomes (with regard to getting richer data) using statements that participants can respond to, vs. open-ended questions. It’s about figuring out which statements will be relevant to every audience, generic enough to serve every event, yet robust enough to yield good data! 🙂
Thanks for this post. We are going to put your questions in our next survey of a training we did around creating student learning objectives. I’ll let you know how it turns out. 🙂
Thanks Mya! Please do let me know how it goes with your data collection. Be sure to read Susan’s, Chad’s, and Carolyn’s comments as well for additional perspectives and ideas.
In addition to those standard Likert-response questions, I include retrospective-type questions as well, such as:
1) Before this workshop, my knowledge of classroom management within the PBIS framework was: Excellent / Good / Average / Below average / Poor
2) After this workshop, my knowledge of classroom management within the PBIS framework is: Excellent / Good / Average / Below average / Poor
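A minimal sketch of how responses to this pair of retrospective post-then-pre items might be scored, assuming the labels are mapped to 1–5; the mapping and the sample responses below are illustrative only, not actual workshop data:

```python
# Map the response labels to numbers so before/after can be compared.
SCALE = {"Poor": 1, "Below average": 2, "Average": 3, "Good": 4, "Excellent": 5}

# Illustrative responses; not real data.
responses = [
    {"before": "Below average", "after": "Good"},
    {"before": "Average", "after": "Good"},
    {"before": "Good", "after": "Good"},
]

# Per-respondent shift and the average self-reported gain.
shifts = [SCALE[r["after"]] - SCALE[r["before"]] for r in responses]
mean_shift = sum(shifts) / len(shifts)

print("Per-respondent shifts:", shifts)          # e.g. [2, 1, 0]
print(f"Mean self-reported gain: {mean_shift:.2f} scale points")
```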
Great idea Chad! I like using retrospective post-then-pre questions. In this case, we are using a form that must serve an array of professional learning sessions, ranging from one-day workshops to multi-session courses on a wide variety of topics. Each question on our feedback form must be generic enough to serve all of those courses. We could certainly substitute “this topic” for “PBIS Framework” and try that question.
Sheila, this is a timely piece, as I have been thinking, in collaboration with colleagues, about which questions yield the most effective feedback and data to inform our next steps and show evidence of adult and student learning.
Thank you for sharing your work. We have been using a “T” Chart model after professional learning experiences with the intended learning outcomes across the top of the “T”. The left side of the “T” has an exclamation mark “!” with the phrase, “As a result of this learning experience, I feel more confident about…” The right side of the “T” has a question mark “?” with the phrase, “I still have questions about…”
I wonder if this might help with Sue’s example, where three different folks attended the same learning with very different mindsets and skill sets?
Thank you for your blog Sheila! As always, I get so much out of thinking about your words and applying them to my work.
Fondly,
Carolyn
Thanks so much for your comment Carolyn, and for your kind words! I like the idea of using sentence starters. I do think it facilitates participants’ ability to write about their experiences in professional learning. I’m somewhat concerned about the presupposition that people necessarily feel more confident about something, but perhaps you could ask them prior to that IF they feel more confident and then have them complete the sentence if they answer in the affirmative. Do you feel you have been getting rich, informative data from these two sentence starters?
Sheila, thanks for helping me to think about PD eval. When I worked with AEA I was always trying to find the right mix of short and informative for PD eval.
One question, though: what are you getting in response to “This professional learning opportunity changed the way I think about this topic”? I may be reading this wrong (as all too often seems to be the case), but I wouldn’t know whether this is a good thing or a bad thing.
Here’s what I’m thinking. I’ll use dataviz as an example since I just taught a workshop on dataviz and I’ll throw in a couple of hypothetical attendees:
1. Attendee A came in excited about dataviz and hopeful about using it in her job. She left feeling that it was out of her reach and she needed more technical skills.
2. Attendee B came in hating dataviz and feeling it was a fad (he’d perhaps been sent by his employer to attend); he left ready to apply it to a project he was working on and feeling it was a valuable tool.
3. Attendee C came in fairly competent in dataviz and left with more nuanced skills, but with the same basic thoughts/attitudes toward the topic.
Perhaps the question is one that must be fleshed out in the qualitative answers, but did you find that people systematically went through and explained each response qualitatively? Otherwise the quant response would be difficult to interpret.
Having said that, I’m excited to see the direction you are going and look forward to learning what you learn.
Very appreciatively, Susan
Thanks so much for your comments, Susan! You are great at pushing my thinking! You’re right about the “change in thinking” question, and I think there is more work to be done there, perhaps with follow up questions added to the mix. We struggle to balance our need for rich data with the obligation to limit respondent burden, and so find ourselves in a constant state of prioritizing our information needs and thus adding and subtracting questions.