Who hasn’t answered the question, “What did you learn?” after attending a professional development session? As a PD facilitator and evaluator, I’ve certainly used feedback forms with this very question. After all, measuring participant learning is fundamental to PD evaluation.

In this post, I’ll share examples of actual data from PD evaluation in which we asked the direct question, “What did you learn?” I’ll then explain why this is a difficult question for PD participants to answer, resulting in unhelpful data. Next, I’ll offer a potential solution in the form of a different set of questions for PD evaluators to use in exploring the construct of participant learning. Finally, I’ll show where participant learning fits into the bigger picture of PD evaluation. 

What happens when we ask “What did you learn?” 

Here are examples of actual participant responses to that question:

  • After a session on collaborative problem solving with students with behavioral difficulties: How to more effectively problem solve with students
  • After a session on co-teaching: Ways to divide up classroom responsibilities
  • After a session on teaching struggling learners: Some new strategies to work with struggling students

In my experience, about one-third to one-half of participant responses to that ubiquitous question are nothing more than restatements of the course title and thus similarly uninformative to an evaluator.

On the futility of asking “What did you learn?”

It’s challenging to get people to clearly articulate what they have learned on a feedback form distributed after a professional development session. Whether the question is asked immediately after the learning has taken place, or after some time has passed and the participant (in theory) has had time to process and apply the learning, the outcome (in terms of the data collected) is the same. People don’t seem to be able (I’m working under the assumption that they are indeed willing) to answer “What did you learn?” with the depth and richness of written language that would help professional learning planners make effective decisions about future programming. They’re not to blame, of course. It’s just as difficult for me to answer that question when I’m a participant.

Parents and teachers know this: When you ask a child a question and he or she answers with “I don’t know,” that response can have a whole range of meanings from “I can’t quite articulate the answer you’re looking for” to “I’m not certain I know the answer” to “I need more time to process the question” to “I don’t understand the question” to “I really don’t know the answer” to “I don’t want to tell you!” It’s no different for adults. Someone who answers “What did you learn?” by essentially restating the title of the PD session is in effect saying, “I don’t know.” As a PD evaluator, it is my job to figure out exactly what that means.

How else can we know what participants learned?

Of course, we’re talking about surveys – self-reported perceptions of learning. There are certainly other ways for evaluators to gain an understanding of what participants learned.

We can interview them, crafting probes that might help them more clearly articulate what they learned. Interviews offer dedicated time and the opportunity for participants to give the question their full attention. In contrast, surveys are often completed when participants feel rushed at the end of a PD session, or at a later time, when they are fitting survey completion in among a myriad of other job duties.

We can observe participants at work, looking for evidence that they are applying what they learned in practice, thus getting at not only what they may have learned, but also what Kirkpatrick called “behavior” and Guskey calls “participant use of new knowledge and skills” (see below for more on these evaluators and their prescribed levels of PD evaluation).

Both interviews and observations, however, are considerably more time-consuming and thus less feasible for an individual evaluator.

As an alternative, I wondered what might happen if, rather than asking, “What did you learn?” we asked, “How did you feel?” Learning has long been closely associated with emotion. (For more on learning and emotions, check out this article, this one, and this one, and look at the work of John M. Dirkx and Antonio Damasio, among many others.) Would PD participants be better able to articulate how they felt during PD, and would their learning then become evident in their writing?

What happens when we ask a different question?

Well, a different set of questions, really. A colleague and I created a new feedback form to pilot with PD participants, in which we seek to understand their learning through a series of five questions. We discussed at length what it is we want to know about participants’ learning in order to inform our programmatic decisions. We concluded that it is not necessarily the content (i.e., if participants attend a course on instructional planning, we expect they will learn something about instructional planning), but rather whether participants experienced a change in thinking, whether they feel they learned a great deal, and whether the content was new to them.

We begin with these three questions, using standard 5-point Likert response options (Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree):

  1. This professional learning opportunity changed the way I think about this topic.
  2. I feel as if I have learned a great deal from participating in this professional learning opportunity.
  3. Most or all of the content was a review or refresher for me (this question is reverse-coded, of course).

We then ask participants about their emotions during the session with a set of “check all that apply” responses:

During this session I felt:

  • Energized
  • Renewed
  • Bored
  • Inspired
  • Overwhelmed
  • Angry
  • In agreement with the presenter
  • In disagreement with the presenter
  • Other

Finally, we ask participants to “Please explain why you checked the boxes you did,” and include an open essay box for narrative responses.
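
As a side note for anyone tabulating these responses in a spreadsheet or script, here is a minimal sketch (in Python, with item and key names I chose purely for illustration; it is not part of our actual instrument or analysis) of how the three Likert items might be converted to numeric scores, with the third item reverse-scored so that higher values consistently point toward newer content and more perceived learning.

```python
# Minimal sketch (illustration only): convert the three Likert items to numbers,
# reverse-scoring item 3 so that higher always means "more/newer learning."

LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def score_response(q1: str, q2: str, q3: str) -> dict:
    """Score one participant's responses to the three Likert items.

    q1: "...changed the way I think about this topic."
    q2: "I feel as if I have learned a great deal..."
    q3: "Most or all of the content was a review or refresher for me."
        (reverse-coded: 6 minus the raw value, so a higher score means
        the content was newer to the participant)
    """
    return {
        "changed_thinking": LIKERT[q1],
        "learned_a_great_deal": LIKERT[q2],
        "content_was_new": 6 - LIKERT[q3],
    }

# Example: "Agree" on all three items yields 4, 4, and 2 (the last is low
# because agreeing that the content was a refresher counts against newness).
print(score_response("Agree", "Agree", "Agree"))
```

Reverse-scoring the refresher item is simply a convenience: it keeps all three items pointing in the same direction if you ever want to average or compare them.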

I’ve only seen data from one course thus far, but the results are quite promising: participants were very forthcoming in their descriptions of how they felt. Through their descriptions, we were able to discern the degree of learning and, in many responses, how participants plan to apply that learning. We received far fewer uninformative responses than in previous attempts to measure learning with the single direct question. As we continue to use this new set of questions, I hope to share example responses in a future post.

"We learn more by looking for the answer to a question and not finding it...than we do from learning the answer itself." -Lloyd Alexander

Image Credit: Collette Cassinelli via Flickr

Where does participant learning fit into the PD evaluation picture?

In the 1950s, Donald Kirkpatrick famously proposed four levels of evaluation for training programs – essentially measuring participants’ 1) reactions, 2) learning, 3) behavior, and 4) results. Thomas Guskey later built upon Kirkpatrick’s model, adding a fifth level – organization support and change (which Guskey actually identifies as level 3; for more on this topic, see this aea365 post I wrote with a colleague during a week devoted to Guskey’s levels of PD evaluation sponsored by a professional development community of practice).

For hardcore evaluation enthusiasts, I suggest Michael Scriven’s The Evaluation of Training: A Checklist Approach.

What other questions could we ask to understand PD participants’ learning?

I welcome your suggestions, so please add them to the comments!

 
