This study tackled a fundamental challenge in environmental education research: the lack of common outcome measures that can apply across diverse programs while remaining sensitive enough to detect meaningful differences in program quality. The researchers asked whether there exists a consistent set of outcomes that all quality environmental education programs for youth could reasonably aspire to achieve, and if so, how to measure them effectively.
The research team conducted an extensive participatory process involving environmental education experts, practitioners, and academic researchers to identify consensus-based crosscutting outcomes. This included workshops with the National Park Service Advisory Board Education Committee, engagement with professional associations such as the North American Association for Environmental Education (NAAEE) and the Association of Nature Center Administrators (ANCA), and collaboration with leading researchers and program providers. The process drew on four key perspectives on environmental education outcomes: environmental literacy (based on the Tbilisi Declaration's framework of awareness, knowledge, attitudes, skills, and participation), positive youth development (focusing on assets essential to human well-being), academic achievement (alignment with educational standards), and twenty-first century skills (critical thinking, problem-solving, communication, and collaboration).
Through this collaborative effort, the team identified 12 aspirational outcomes that environmental education programs could work toward achieving. These outcomes encompass a broad range of potential impacts including enjoyment, connection to place, learning about human-environment interactions, enhanced interest in learning, development of twenty-first century skills, meaning-making and self-identity formation, self-efficacy, environmental attitudes, and various forms of action orientation toward environmental stewardship, cooperation, and academic engagement.
The researchers then developed survey items to measure these outcomes using established scale development procedures. They employed 11-point scales rather than traditional 5-point Likert scales to address persistent problems with positive skew and lack of variability that plague many environmental education evaluation instruments. Most items used retrospective formats asking participants to reflect on what they learned or how the program influenced them, which helps address response shift bias and testing effects common in pre-post designs for short-duration programs.
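To make that item format concrete, the sketch below (in Python, using pandas) shows how retrospective items answered on an 11-point (0–10) scale might be represented and averaged into scale scores. The item wording, factor names, and scoring approach are illustrative assumptions for this summary, not the published EE21 items.

```python
# Minimal sketch of retrospective, 11-point (0-10) survey items rolled up into
# scale scores. Item wording and factor names are hypothetical illustrations,
# not the published EE21 items.
import pandas as pd

# Hypothetical retrospective items, each answered on a 0-10 scale
# (e.g., 0 = "not at all", 10 = "a great deal").
ITEMS = {
    "interest_1": "Because of this program, I am more interested in learning new things.",
    "interest_2": "Because of this program, I want to learn more about nature.",
    "efficacy_1": "Because of this program, I feel I can make a difference for the environment.",
    "efficacy_2": "Because of this program, I believe I can solve problems I care about.",
}

# Hypothetical mapping of items to outcome scales (factors).
SCALES = {
    "interest_in_learning": ["interest_1", "interest_2"],
    "self_efficacy": ["efficacy_1", "efficacy_2"],
}

def score_scales(responses: pd.DataFrame) -> pd.DataFrame:
    """Average each respondent's 0-10 item responses within each scale."""
    scores = {scale: responses[items].mean(axis=1) for scale, items in SCALES.items()}
    return pd.DataFrame(scores)

# Example: three respondents' answers on the 0-10 response format.
responses = pd.DataFrame(
    [
        {"interest_1": 9, "interest_2": 8, "efficacy_1": 7, "efficacy_2": 8},
        {"interest_1": 4, "interest_2": 5, "efficacy_1": 6, "efficacy_2": 5},
        {"interest_1": 10, "interest_2": 10, "efficacy_1": 9, "efficacy_2": 10},
    ]
)
print(score_scales(responses))
```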
The instrument was tested across six diverse environmental education sites representing different program types, durations, and geographic contexts. These included single-day field trips at Great Smoky Mountains National Park and Everglades National Park, multi-day residential programs at three different outdoor schools, and informal visits to the North Carolina Museum of Natural Sciences. The sample included over 1,200 participants from urban and rural areas with diverse demographic backgrounds.
Using confirmatory factor analysis and increasingly stringent statistical tests, the researchers validated the instrument's psychometric properties. The final EE21 instrument consists of 10 factor-based scales measured by 36 items, which demonstrated excellent construct validity, reliability, and measurement invariance across all six testing sites. The statistical analyses confirmed that the scales reliably measure their intended constructs and are sensitive enough to detect meaningful differences between programs with different characteristics and quality levels.
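As an illustration of this kind of psychometric check, the sketch below fits a confirmatory factor analysis with the Python semopy package, assuming responses sit in a pandas DataFrame with one column per item. The factor and item names are hypothetical, and the original study may have used different software; the point is only to show the general workflow of testing whether items load on their intended factors.

```python
# Sketch of a confirmatory factor analysis check, assuming survey responses are
# in a pandas DataFrame with one column per item. Factor and item names are
# hypothetical; the published analysis may have used other tools.
import pandas as pd
import semopy

# Lavaan-style measurement model: each latent factor is indicated by its items.
MODEL_DESC = """
interest_in_learning =~ interest_1 + interest_2 + interest_3
self_efficacy        =~ efficacy_1 + efficacy_2 + efficacy_3
"""

def fit_cfa(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesized measurement model and return standard fit statistics."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)
    # calc_stats reports indices such as CFI and RMSEA, which are typically used
    # to judge whether the hypothesized factor structure fits the data.
    return semopy.calc_stats(model)

# Measurement invariance across sites would then be examined by fitting
# increasingly constrained versions of this model for each group (equal
# loadings, then equal intercepts) and comparing fit across the six sites.
```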
The study's results show significant differences in mean outcome scores across the six program sites, demonstrating that EE21 can effectively discriminate between programs and could be used for comparative research. This sensitivity is crucial for identifying which programmatic approaches lead to better outcomes and for enabling evidence-based program improvement efforts.
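A minimal sketch of this kind of between-site comparison, assuming scale scores have already been computed and stored with a hypothetical "site" column, is a one-way ANOVA on each outcome:

```python
# One-way ANOVA testing whether mean scale scores differ across program sites.
# Column names ("site", "self_efficacy") are hypothetical.
import pandas as pd
from scipy import stats

def compare_sites(scores: pd.DataFrame, outcome: str = "self_efficacy"):
    """Test whether mean outcome scores differ across program sites."""
    groups = [group[outcome].dropna().to_numpy()
              for _, group in scores.groupby("site")]
    f_stat, p_value = stats.f_oneway(*groups)
    return f_stat, p_value

# Usage (with hypothetical data):
# scores = pd.read_csv("ee21_scale_scores.csv")  # columns: site, self_efficacy, ...
# f_stat, p = compare_sites(scores)
# print(f"F = {f_stat:.2f}, p = {p:.4f}")
```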
The EE21 instrument addresses several important needs in the environmental education field. It provides a standardized tool that can facilitate large-scale comparative studies to identify best practices, enables collective evaluation efforts across multiple programs, and creates opportunities for learning networks in which successful organizations can share effective strategies. The instrument is designed to be practical for field use while maintaining scientific rigor, and the complete survey can be administered to adolescent participants in a reasonable amount of time.
The researchers acknowledge several limitations of their approach. The instrument measures general rather than content-specific outcomes, meaning it does not assess knowledge about particular environmental issues such as climate change or biodiversity loss; programs wishing to measure such knowledge would need to supplement the survey with content-specific items. The survey is also subject to typical survey biases, including social desirability effects, and some items proved challenging for younger or lower-achieving participants. The researchers suggest that the retrospective pre-post items measuring environmental attitudes and self-efficacy could be modified or dropped if needed without compromising the validity of the other scales.
This research represents a significant step forward for the environmental education field by providing the first psychometrically validated common outcomes instrument that can apply across diverse program types and contexts. The EE21 tool enables the field to move beyond anecdotal evidence toward systematic, empirical understanding of what makes environmental education programs effective, ultimately supporting the development of more impactful educational experiences for young people.
The Bottom Line
This research developed and validated the Environmental Education for the Twenty-First Century (EE21) instrument, a comprehensive survey tool that measures crosscutting outcomes for youth environmental education programs serving ages 10–14. Through extensive collaboration with practitioners and researchers, followed by rigorous psychometric testing across six diverse program sites, the study established 10 reliable outcome scales that can assess the breadth of what quality environmental education programs aspire to achieve. The EE21 instrument addresses a critical gap in the field by providing a common measurement tool that enables comparative research and evidence-based program improvement across different types of environmental education experiences.