Over 200 residential environmental education centers (REECs) exist in the United States, offering students immersive experiences in nature. Like many other informal education outlets, REECs struggle with evaluating student learning outcomes in their programs due to a lack of time, money, and expertise.
This study presents an updated analysis of evaluation practices in REECs and compares the results with findings from a similar study conducted by Chenery and Hammerman, published in the Journal of Environmental Education in 1985. For this research, the authors adapted the survey used in the Chenery and Hammerman paper, retaining its questions about evaluation practices and demographics and adding eight questions about overall satisfaction with, barriers to, and needs regarding evaluation. The survey was delivered electronically to program directors at 205 REECs; 114 responded, a response rate of roughly 56%.
In terms of demographics, most responding centers were located in rural areas (78%), self-funded (80%), and focused on science education (97%). When asked about evaluation, 85% of directors said they participated in developing program evaluations, and 78% said they were involved in conducting them. Beyond the directors, full-time staff also frequently participated in developing and executing evaluations, whereas independent researchers were rarely involved.
Evaluations at REECs most commonly included teacher surveys (91%) and program observation (82%); student surveys and discussions were used by fewer than half of the responding institutions. Evaluations primarily measured teacher and student satisfaction, along with operations and logistics. Of the respondents, 71% said their current evaluation practices met their own needs, while 61% said their evaluations met the needs of stakeholders (e.g., teachers, students, parents, and administrators).
When asked about barriers to conducting evaluation, respondents cited limited funding, time, and knowledge of evaluation practices as their primary obstacles. They reported that it was difficult to engage students meaningfully in evaluation and to find enough time to administer evaluation tools properly, especially without detracting from time spent outdoors. Additionally, because desired outcomes and needs varied with the individual requirements of schools and teachers, adapting evaluation approaches to these myriad situations was difficult.
Directors were also asked to describe their programs' evaluation needs. Most said they were interested in developing tools that could indicate shifts in student attitudes or behavior toward the environment, or the program's impact on academic achievement. Respondents also wanted more rigorous evaluation tools that provided deeper information about the student experience, and directors said they would generally like their staff to have greater capacity for evaluation. Participants in this study seemed to hold conflicting visions of the ideal evaluation: some wanted in-depth, comprehensive information about the experience, while others desired fast, easy-to-implement instruments.
Compared with Chenery and Hammerman's 1985 findings, fewer evaluations in the current study gathered data from students, but teachers were more often involved in the evaluation process. In both studies, evaluation was conducted primarily at the end of an experience, and both reported similar challenges to conducting evaluation, though acceptance of evaluation's importance now appears to be growing.
The Bottom Line
Environmental education professionals often lack the skills necessary to conduct evaluation, which makes the task daunting. Training staff members in evaluation practices and developing an institutional culture of evaluation may mitigate some of the other barriers to evaluation, such as a lack of time and money. Educators who are trained in evaluation and recognize its importance to institutional growth may be better equipped to devise innovative evaluation approaches that fit their institution's specific needs and available resources.