This week’s readings all focused on the importance of evaluation. While evaluation has been emphasized throughout the semester as a feature of successful digital public history projects, this week’s readings went more in depth into exactly how to go about evaluating successfully.
There are many different types of evaluation. One important step in evaluating the success of your project is to analyse your information through analytics. The Tate Museum in the UK focused on the importance of this type of analysis to ensure that you are reaching your audience, achieving your goals, and managing the project appropriately internally. The Tate recommends using Google’s free analytics service to measure your effectiveness and to connect all of the elements of your web presence (social media, emails, website) to see exactly how effective your message is.
However, analytics alone cannot determine a project’s success. Both “Leveling Up” and “Museum Evaluation without Borders” recommend moving beyond analytics. While they argue that analytics are one important tool, they also point out its weakness: it only quantifies how many users are being reached, rather than qualifying their experience. They recommend evaluation methods such as paper and wireframe testing, play testing, soft launching, surveys, interviews and analysing responses, linking program activities with intended outcomes and impacts, taking a systems-oriented evaluation approach, using affirmative data collection approaches, and daring to have courageous conversations with visitors and staff.
The “Experiencing Exhibitions” article would lead you to believe that museums are not evaluating the visitor experience at all, calling for more cross-disciplinary studies of visitor experiences that do not focus on linear ideas of knowledge. However, all of this week’s readings show that this is not the case. The next steps called for in the article are actually achieved by the Tate’s evaluation process, with its emphasis on experience over education. But perhaps the authors are simply calling for more emphasis on evaluation. In that case, the resources of the Institute of Museum and Library Services (IMLS) would be a great place to start.
The IMLS provides assistance for creating evaluation models under the broad categories of general guides for program evaluation and outcome monitoring, project planning tools for museums and library services, common evaluation methods and terms, measuring outcomes in museums and libraries, and network associations that provide evaluation resources.
Browsing these resources, I found the Shaping Outcomes site within the Project Planning Tools section to be the most helpful. Its treatment of Outcome Based Planning and Evaluation (OBPE) gives a step-by-step process for creating a project, identifying goals, building toward those goals, and helping both staff and visitors achieve them. Creating a model similar to Content Strategy, the site was helpful in laying out not only the steps for good evaluation but also the components of each step, so that you could clearly define your own needs. This set it apart from the Templates for Creating Logic Models, which assumed a certain amount of preexisting knowledge on the part of evaluators.
Evaluating your project’s goals, resources, tools, and outcomes before, during, and after launch is the only way to ensure continued success.