New Tools Available for Measuring Interpretation's Impact

Weiler, B., & Ham, S. H. (2010). Development of a research instrument for evaluating the visitor outcomes of face-to-face interpretation. Visitor Studies, 13, 187-205.

Although the literature is full of published evaluations of interpretive programs, the authors of this paper find most of these studies of limited use for informing other programs: evaluations tend to be time- and site-specific and to rely on customized methods that do not transfer to other settings. The authors therefore set out to create an evaluation package that is easy to use, inexpensive, reliable, flexible, ethical, and scientifically sound (among other considerations). The evaluation tools were developed for face-to-face interpretive programs at heritage sites.

To start, the authors defined indicators of visitor outcomes in three domains: cognition (for example, what visitors learn in interpretive programs), affect (how interpretive programs shape visitor attitudes), and behavior (how interpretive programs influence what visitors do). The researchers gathered representatives from two Australian institutions that offer interpretive programs, Port Arthur and Sovereign Hill, drawing staff from all institutional levels, including front-line interpreters, program managers, and executive-level administrators. The groups brainstormed indicators of “successful” or “effective” interpretation, and an industry advisory group then revised and consolidated the resulting list into 11 classes of outcomes, or indicators. Examples include the extent to which the interpretive program contributed to a positive attitude toward heritage preservation, a desire to participate in more interpretive activities, an intention to purchase a souvenir related to the experience, a desire to stay longer at the site, and an intention to recommend the program or site to others. A sketch of this taxonomy appears below.
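
To make the shape of that taxonomy concrete, here is a minimal sketch in Python. The indicator names and their domain assignments are illustrative guesses drawn from the examples above, not the authors' actual list.

```python
# Illustrative sketch of the indicator taxonomy described above.
# Names and domain assignments are hypothetical, loosely based on
# the examples in the text rather than the authors' instrument.
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    name: str
    domain: str  # "cognitive", "affective", or "behavioral"

INDICATORS = [
    Indicator("positive attitude toward heritage preservation", "affective"),
    Indicator("desire to participate in more interpretive activities", "behavioral"),
    Indicator("intention to purchase a related souvenir", "behavioral"),
    Indicator("desire to stay longer at the site", "behavioral"),
    Indicator("intention to recommend the program or site", "behavioral"),
    # ...the remaining classes would follow the same pattern
]

# Group indicator names by outcome domain for reporting.
by_domain: dict[str, list[str]] = {}
for ind in INDICATORS:
    by_domain.setdefault(ind.domain, []).append(ind.name)
print(by_domain)
```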

Next, the researchers assessed the relative merits of a variety of data collection methods. They rated each method according to its cost, time required to implement, speed of feedback, burden on visitors and staff, validity, and reliability. Their analysis suggested that the questionnaire format best met their criteria “because it can produce high levels of validity and reliability at comparatively low cost and with a relatively small burden on visitors and staff.” They also note that a questionnaire can gather cognitive, affective, and behavioral data in one instrument.
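
The comparison the authors describe amounts to scoring each candidate method against a set of criteria. The fragment below sketches that logic; the scores and equal weights are invented for illustration and do not reproduce the authors' actual ratings.

```python
# Hypothetical criterion-scoring matrix for data-collection methods.
# Scores (1 = poor, 5 = good) and equal weights are invented for
# illustration; they do not reproduce the authors' analysis.
CRITERIA = ["cost", "time", "feedback_speed", "burden", "validity", "reliability"]
WEIGHTS = {c: 1.0 for c in CRITERIA}  # equal weighting assumed

methods = {
    "questionnaire": {"cost": 4, "time": 4, "feedback_speed": 4,
                      "burden": 4, "validity": 4, "reliability": 4},
    "interview":     {"cost": 2, "time": 2, "feedback_speed": 3,
                      "burden": 2, "validity": 5, "reliability": 3},
    "observation":   {"cost": 3, "time": 2, "feedback_speed": 3,
                      "burden": 5, "validity": 3, "reliability": 3},
}

# Rank methods by weighted total score, best first.
ranked = sorted(methods,
                key=lambda m: sum(WEIGHTS[c] * methods[m][c] for c in CRITERIA),
                reverse=True)
print(ranked)  # with these invented scores, "questionnaire" ranks first
```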

The researchers then developed a questionnaire and tested it at multiple sites. They administered the instrument post-visit; visitors typically needed about three to five minutes to complete it. Confirmatory factor analysis was used to validate the indicators. The final instrument includes 30 questions that measure 10 of the indicators (the 11th, visitor interaction with the interpreter, is measured through observation). The researchers packaged the questionnaire and the observation instrument into a toolkit that includes a manual explaining the development of the instrument, sampling methods, and data collection and interpretation, along with a customized database for analyzing and reporting results. The instrument has been used in a range of settings, including national parks, zoos, botanical gardens, wineries, and ecotourism sites.
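
As an illustration of the validation step, the sketch below runs a confirmatory factor analysis with the open-source semopy package. The package choice, the factor and item names, and the input file are all assumptions for illustration; the summary does not specify the authors' tooling or model specification. With 30 questions measuring 10 indicators, each indicator would load on roughly three items, so a full specification would contain ten measurement lines like the two shown.

```python
# Sketch of a confirmatory factor analysis like the one the authors
# describe. Factor names, item names, and the input file are
# hypothetical; the real model would have ten measurement lines.
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical response file with one column per question, q1..q30.
responses = pd.read_csv("visitor_responses.csv")

spec = """
attitude_to_preservation =~ q1 + q2 + q3
intention_to_recommend   =~ q4 + q5 + q6
"""

model = Model(spec)
model.fit(responses)
print(model.inspect())    # factor loadings and other parameter estimates
print(calc_stats(model))  # fit indices such as CFI and RMSEA
```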

The researchers note, however, that while this is an easy-to-use instrument that they advocate, it does have limitations. It relies on visitors' self-reported impressions of the program's impact. The researchers also note that the instrument does not measure “the longer-term, post-visit impacts of interpretation on visitors such as what they know, feel, and do after they return home.” And although the instrument is intended to measure the impact of face-to-face interpretation, many visitors may respond to the questions based on their entire experience at a site, not just the interpretive programs. Finally, the researchers note that the questionnaire reveals only what a program's impacts are, not why the program is or is not achieving them. More research is needed to establish cause-and-effect relationships.

The Bottom Line

An easy-to-use, low-cost, reliable evaluation instrument is available for assessing the impact of face-to-face interpretive programs at heritage sites. The toolkit package includes the evaluation instrument, a user manual, and a database for data analysis and reporting.