Tuesday, October 24, 2006

Selecting an Evaluator - A Primer

Notes Regarding Professional Evaluation Work and Evaluation Reporting

David S. Robinson
www.evaluationhelp.com

If I were looking to employ an evaluator or evaluation firm, I would want to know how responsive the evaluator has been to clients with respect to their evaluation needs.

Evaluator Checklist
• Can he/she complete an evaluation report in a timely manner?
• Is he/she respectful of the program and the complex environment or context in which it operates?
• Does he/she listen, or is he/she single-minded about the way evaluation must be done?
• Does he/she understand the concepts behind the program, that is, the program theory or the logical connections between the intervention and its outcomes?
• Does he/she write well? Is he/she conscientious and detail-oriented?
• Does he/she stay focused, or wander in his/her questions and contributions?
• Does he/she search out seminal articles that are research oriented and share them with the program staff as appropriate?

Brief Background
I was the Director of Research and Evaluation for MSPCC for 24 years. I have been teaching research and practice evaluation since 1992 and I continue to assist the Simmons College School of Social Work in several ways. I have had a federal grant from the Head Start Bureau (1998 - 2001) and was the principal investigator on the study. I helped design the practice evaluation course at Simmons College. I consult to community organizations in the Boston area as part of the School of Social Work Research Institute. I started DSRobinson & Associates in 2004 and currently consult to medical training programs, public health initiatives, and child welfare programs in Massachusetts and Rhode Island.

My Evaluation Practice
I am trained in Empowerment Evaluation, have been a member of the American Evaluation Association since 1990, and am a current member of the American Psychological Association (APA). Here are some points worth noting about my experience and work.

1. Before beginning to write an evaluation plan, I tend to ask many questions, or to restate what I have heard or read about the program or similar programs, always trying to clarify what the program director (or key leader) is attempting to do. I may offer prior research articles in similar areas, or contribute program evaluation articles that clarify the kinds of things evaluators do, define new terms, and point to people working in the same or similar areas. All of this preliminary work, which may take more than one meeting, is designed to explore with the director an evaluation approach that will be helpful, answer important questions, help strengthen or improve the program, contribute to scientific knowledge in the field, and be a positive learning experience for program participants.

2. After learning as much as I can about the program theory and actual intervention activities, I try to establish a logic model or conceptual model of the program, graphically and in text, so that all of the key participants can comment on it, revise it, think about the intervention elements that are linked to outcomes, and "evaluate" how accurately I have summarized the program and its context. This phase of the evaluation is participatory and often empowering for program directors, staff, and participants. Sometimes it is experienced as frustrating – “Enough discussion, already!” may be heard. A little frustration with the pace of evaluation planning is healthy. If I hear it repeatedly, I quickly put an initial plan on paper, before I feel ready to do so, to help program members feel that progress is being made and to give myself a little more time to assess evaluation readiness.

3. Evaluation Design. The formulation of the evaluation design is a key topic for me and, I think, sets the tone and conditions for the most important evaluation work ahead. I know that most program directors want their program to be accepted by their colleagues as "evidence-based," and they are aware that achieving this level of reception is rare and challenging. I try to be forthright with them that achieving the status of a "science-based" program is a long and arduous road with many obstacles in the real world. First, I emphasize the strongest possible experimental design that will fit the context of the program, and discuss the trade-offs that accompany each decision to move toward more quasi-experimental or exploratory designs. Second, I help the program leaders think about the best design for their objectives, and try to give them the tools (words and attitudes) to create a culture of self-reflection and professionalism worthy of the scientific method. But I also emphasize the value of qualitative methods, which are best suited to capturing the larger context of the program and the peculiar circumstances of implementing it in its setting. What do participants say about the program? How does the program appear to the interveners? Where did the intervention vary from the plan? These are some of the questions I raise, and I incorporate the answers into the evaluation design.

4. I am careful to research standardized measures for the outcomes envisioned by the conceptual model of the program, but I am open to considering indicators suggested by the program participants. I will give my advice about why one approach is more explainable or will be more acceptable to the scientific community, but I am flexible on this. I believe that many of the most powerful and innovative aspects of programs are not adequately captured by existing standardized instruments. I am willing to create new items and modify them according to the needs and circumstances of the setting and the characteristics of the participants.

5. I carefully develop data collection methods that fit the participants, and make use of innovative collection methods that are consistent with the participants' characteristics and values. I will use computer-based online methods or paper-and-pencil methods to ensure that all participants have an opportunity to respond. I often make suggestions for incentives, because I believe that participants' time is valuable and should be compensated whenever I ask them to do some work. I am prepared to write human subject protection applications at the sponsoring institutions (IRB committee applications vary, but are essentially consistent with the federal IRB guidelines located at http://www.hhs.gov/ohrp/), and will write and revise informed consent documents as needed. I will prepare responses to outside commentaries by experts who have suggestions for improvement, and make those changes even at the last minute when called upon.

6. Reporting Results. I am careful to check the quality of the data before initiating analyses. I use a step-wise analytical strategy, first analyzing frequencies for each question and then moving to progressively more complex analyses. I then review each evaluation question and attempt to answer it using appropriate statistical analyses, often combining questions (or items) into compound constructs (combined variables) to allow for more sophisticated analyses appropriate to the question. (I have found that the simplest-appearing evaluation questions are often the most complex.) I write readable, professionally documented reports tailored to the audience that include background, organizational context, literature review, methods, results, conclusions and recommendations, and an abstract or executive summary. I try to include text and graphic representations of the results whenever possible to make the report attractive and understandable to different kinds of readers.
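As a rough illustration of this step-wise strategy, here is a minimal sketch in Python using the pandas library. The file name, item names (q1 through q4), the "engagement" construct, and the outcome column are hypothetical placeholders rather than measures from any actual program; the point is simply the progression from data checks, to frequencies, to a combined construct, to a more complex analysis.

import pandas as pd

# Hypothetical survey data: four Likert-type items (1-5) and an outcome score.
df = pd.read_csv("survey_responses.csv")  # placeholder file name

# Step 1: check data quality - missing values and out-of-range responses.
print(df[["q1", "q2", "q3", "q4"]].isna().sum())
print(df[["q1", "q2", "q3", "q4"]].describe())

# Step 2: simple frequencies for each question.
for item in ["q1", "q2", "q3", "q4"]:
    print(df[item].value_counts(dropna=False).sort_index())

# Step 3: combine related items into a compound construct (here, the item mean).
df["engagement"] = df[["q1", "q2", "q3", "q4"]].mean(axis=1)

# Step 4: a more complex analysis tied to an evaluation question,
# for example, how the construct relates to a program outcome.
print(df[["engagement", "outcome_score"]].corr())

In practice the later steps would be chosen to match the evaluation questions and design (group comparisons, regression models, and so on), but the order - quality checks, frequencies, constructs, then inferential analyses - mirrors the strategy described above.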
