Evaluability Assessment Gauges Program's Readiness for Evaluation

Zint, M. T., Covitt, B. A., & Dowd, P. F. (2011). Insights From an Evaluability Assessment of the U.S. Forest Service More Kids in the Woods Initiative. The Journal of Environmental Education, 42, 255-271.

Hoping to generate more opportunities for underserved youth to spend time in the outdoors, the U.S. Forest Service created the More Kids in the Woods (MKIW) grant program. During its first funding cycle, 2007-2008, the initiative funded 26 programs in 17 states with awards ranging from $27,300 to $165,000. The programs were selected based on the extent to which they were likely to engage partners, reach underserved youth, use innovative or proven techniques, develop recreational skills, improve environmental literacy, involve children in stewardship activities, be sustainable over the long term, and evaluate results.

Rather than attempting a full evaluation of this complex grant program (evaluations of grant programs are infrequently published), the authors instead conducted an evaluability assessment. The authors explain that “Evaluability assessment is an analytical, diagnostic pre-evaluation activity intended to (1) determine a program's evaluability (i.e., readiness for evaluation) and (2) obtain insight into what type of evaluation will be useful to decision makers.” According to the authors, programs are not ready for a full evaluation when they lack clear objectives, when their objectives far exceed the plausible outcomes of the program activities, when they are not implemented as intended, or when stakeholders' interests in the evaluation have not been taken into account. Evaluability assessments can be helpful in determining whether programs are ready for evaluation and in shaping an evaluation that will meet stakeholders' needs. The authors believe this approach should be used more widely and hope that this paper will serve as a model for conducting such assessments.

The authors first created a logic model for the MKIW initiative based on a review of the funded project proposals. They then used the logic model to develop a questionnaire that was sent to the project leaders for each of the 26 funded projects. The survey contained both closed- and open-ended questions and gauged the funded programs' objectives, implementation, perceived success, and interest in evaluation. The response rate was 73%.

Responses to the surveys indicated that most of the projects were implemented as planned. Most of the programs reported successfully employing partnerships, reaching their target audience, and providing opportunities for youth to spend time in natural areas. Most reported that they thought their participants were satisfied with their experience. Most believed that the programs had an effect on participants' environmental behaviors, but most based this belief on assumptions about the outcomes of project activities (for example, program planners/implementers believed that participants who spent time in remote wilderness areas would have a greater respect for nature). Although most (87%) of the program leaders could point to evaluation data supporting participant numbers and satisfaction, few (13%) had evaluation results supporting participant outcomes. And although most of the respondents indicated they were interested in evaluation, most also indicated limited expertise in and resources for evaluation.

Based on the results of the evaluability assessment, the authors conclude that the MKIW program is ready for a full evaluation because it has clear objectives, the projects are being implemented as planned, the program has plausible benefits, and project leaders are interested in evaluation. The full evaluation can also be informed by this initial assessment. For example, the authors note that an evaluation of the programs' impacts on participants' knowledge and attitudes would be appropriate. They are more skeptical, however, of the benefits of evaluating changes in behavior, as previous research suggests that programs must deliberately focus on behavior change, with a longer-term focus and a basis in behavior-change theory, to be effective. The programs assessed in this research did not meet these rigorous criteria for being likely to affect behavior.

The authors note that few evaluations have been conducted of the overall impact of grant programs (rather than each individual project funded under the grant program), and that this assessment “suggests that grant programs can make significant contributions to environmental education through the partnerships they foster.” The authors also hope this research spurs more environmental educators to use this approach, even if it may seem like more work. “We recognize that evaluability assessments may be perceived as yet another layer of evaluation,” they explain, “but . . . we believe that its benefits outweigh costs.”

The Bottom Line

Evaluability assessments can be a helpful tool in determining whether a program is ripe for a full evaluation. Such an assessment can reveal whether the program's objectives are clear and reasonable, whether its activities are being implemented as planned, and whether stakeholders' interests are accounted for. If the assessment reveals that these criteria are met, it can also point to specific areas for investigation in the full evaluation. This kind of assessment isn't necessary in every program evaluation, but it can be appropriate and helpful in certain situations.