Why Are Evaluators So Tentative about the Advocacy Aspect of Our Profession? (Guest Post by Rakesh Mohan)

Rakesh Mohan, Idaho Legislature Office of Performance Evaluations


I’ve recently had the pleasure of meeting an evaluator whose work I’ve followed, and I invited him to write for Evaluspheric Perceptions. Rakesh Mohan has been a regular on EvalTalk, and I’ve admired him for putting his work out there and asking for feedback. In early 2014, his office released a report, Confinement of Juvenile Offenders, and I found myself curious enough to read it. Quite honestly, it wasn’t the topic that interested me; rather, it was the idea of reading a governmental evaluation report produced by someone who is a great fan of, and frequent presenter at, the American Evaluation Association. Needless to say, the report impressed me to no end. Rakesh put into place nearly every principle of good evaluation reporting and data visualization that I have been learning and studying myself. I’m not the only one impressed by the work coming out of his office. In 2011, they received AEA’s Alva and Gunnar Myrdal Government Evaluation Award. Recent posts on two of my favorite blogs highlight the work of his office as well. One can be found at Better Evaluation, “Week 15: Fitting reporting methods to evaluation findings – and audiences,” and the other at AEA365, “Sankey diagrams: A cool tool for explaining the complex flow of resources in large organizations.” (more…)

Outputs are for programs. Outcomes are for people.

A recent experience reviewing a professional organization’s conference proposals for professional development sessions reminded me of the challenge program designers/facilitators encounter in identifying and articulating program outcomes. Time after time, I read “outcome” statements such as: participants will view video cases…, participants will hear how we approached…, participants will have hands-on opportunities to…, participants will experience/explore…, and so on. What are these statements describing? Program activities. Program outputs. What are they not describing? Outcomes. (more…)

Ask a Brilliant Question, Get an Elegant Answer?

With a properly-framed question, finding an elegant answer becomes almost straightforward.

-Stephen Wunker, Asking the Right Question

Here’s what every successful leader, coach, marketer, or teacher knows: You cannot overstate the importance of asking the right questions.

It’s no different for evaluators.

Evaluation questions form the foundation of solid evaluations, while also serving to frame and focus the work. Starting with the right questions can impact the effectiveness and appropriateness of key programmatic decisions that rest on evaluation results. In fact, clearly defined evaluation questions are part and parcel of a successful evaluation – that is, one that is actionable and is used. (more…)

Checklists, and Protocols, and Rubrics…oh My! Evidence without Anguish?

Effective evaluation requires evidence. Documentation and data are the lifeblood of evidence. How can an evidence-based organizational culture balance the need to feed on the artifacts of work and other outputs and still respect the responsibilities of those producing them? How can we collect rich and meaningful data that informs our work and helps us make effective programmatic decisions while reducing respondent burden?

We are a data-hungry yet over-surveyed, over-observed, over-interviewed generation of workers. While data collection and analysis continue to grow and embed themselves into the fibers of organizational culture until they are indistinguishable from the “work,” are they considered the crabgrass or the crocuses? Insidious or delightful? Do they help or hinder the work? (more…)

Like an evaluator in a data store…

I got to thinking about what I love about evaluation and it occurred to me that at least one aspect of it is particularly appealing due to its remarkable resemblance to shopping. Yup, that’s what it is. What I love about evaluation is collecting data!

The same rush of excitement I would get during a day at the mall I now get checking SurveyMonkey for incoming responses. It’s that “thrill of the hunt,” the acquisition, the addition to the collection that ignites the passion. As I become less materialistic and more fiscally responsible (as my golden years approach), I find myself engaging more in the latter than the former.

It happens even prior to data collection. Have you been to the Mall of America? Like planning out my day – thinking about what I am shopping for, how much I really need vs. how much I want, and which stores I will visit – I love designing the surveys. Who do I need to reach? How many of them? What do I really need to know from them vs. what I want to know? How will I get the biggest “bang for my buck”?