Effective evaluation requires evidence, and documentation and data are the lifeblood of evidence. How can an evidence-based organizational culture balance its need to feed on the artifacts of work and other outputs with respect for the time and responsibilities of those producing them? How can we collect rich, meaningful data that informs our work and helps us make effective programmatic decisions while reducing respondent burden?
We are a data-hungry yet over-surveyed, over-observed, over-interviewed generation of workers. As data collection and analysis continue to grow and embed themselves into the fibers of organizational culture until they are indistinguishable from the “work,” are they the crabgrass or the crocuses? Insidious or delightful? Do they help or hinder the work?
It’s so true that we are data-hungry yet overburdened by the actual data collection. I’d love to find more ways to collect meaningful data that are less intrusive and could actually be used by the evaluand as well as the evaluator on a regular basis.
Well said, Mya! I wholeheartedly agree.
Great questions. Some evaluators are finding that making evaluative criteria explicit in rubrics reduces data burden and increases utility. This week we’re focusing on rubrics on BetterEvaluation, including a guest blog by Judy Oakden that summarises lessons learned about using them, along with her case study of an evaluation that used rubrics effectively: http://betterevaluation.org/blog/guest-rubrics
Thanks for your comment, Patricia. I have experimented with rubrics, more for teaching than evaluation, and find them invaluable when well written. I’ve also found that creating a truly effective rubric is not as easy as it sounds; a great deal of work goes into crafting the language to make it useful. One of the greatest values of rubrics is their ability to generate common language, shared expectations, and mutual understanding. I’m looking forward to reading the material on rubrics on betterevaluation.org!