Here's a surprise: I focus first on issues of use. How were findings used? What has happened to recommendations (if some were generated)? What issues have surfaced in putting findings to use? (This opens up and circles back to how the evaluation was conducted and the kinds of issues Jane astutely surfaced in her report on synthesizing what happened in a set of evaluations -- very important observations, Jane).
Having dealt with use of findings and recommendations, I turn to process issues: What was particularly useful about the evaluation process? Not useful? Strengths, weaknesses, gaps? What impacts, if any, did the evaluation process itself have (separate from use of findings, i.e., process uses)?
I don't ask directly about the evaluator's or consultant's value-added, but comments on this inevitably emerge from the focus on findings and process uses.
As regards the value of doing this after the fact, I think of it as walking the talk of evaluation. If we expect programs to evaluate their outcomes and impacts, we need to model (i.e., role model) good evaluation by evaluating our own practice and work. And, of course, it informs our scholarship and inquiries into effective and useful evaluation, which in my case means I get to use the findings in my writings, an added benefit of doing this. :)
Michael Quinn Patton
Utilization-Focused Evaluation
Saint Paul, MN
MQPatton@Prodigy.net