A case study from the Derby Museums as part of the Quality Metrics National Test in the UK.
With special thanks to Derby Museums for sharing their results and insights for this case study.
Derby Museums completed three evaluations using Culture Counts over the course of the trial period, each focusing on a very different event.
The results were shared at a full staff briefing attended by a cross-section of the team. The more qualitative style of data presented via the quality metrics enabled the team to take a more rounded perspective, rather than looking only at hard figures, which do not necessarily do justice to the work or its outcomes. Seeing how satisfied their customers were provided a real morale boost.
The results revealed some surprises, showing high levels of satisfaction where the team had not felt things were especially successful or relevant. The nuances between the different events and their scores proved interesting: some elements of an event particularly resonated with attendees in ways the organisation had not previously recognised with such clarity. This highlighted the value of the self-assessor completing both the prior and post surveys. Through completing the self-assessments, the team also realised that their intentions could be more focused; having clearer intentions from the outset allows the focus of the work to really come through. By mapping expectations against experiences, a richer frame for interpretation is formed.
By combining results from the Quality Metrics survey with demographics from other survey providers, Derby Museums have been able to strengthen their applications for funding. Many funding bodies have specific goals to fulfil or demographics to reach (for example, the elderly, rural communities, or ethnic minorities) and their funds must therefore be allocated appropriately. Incorporating the Quality Metrics results, Derby Museums have presented a positive case in funding applications and funding reports. This has been particularly useful when answering questions about the audience's thoughts and experiences, as they could refer directly to the positive feedback gathered from attendees.
There is also potential to collect more detailed demographics and to gather further data focused on Generic Learning Outcomes. This could be achieved with the system as it currently stands; however, adding further questions was not actively encouraged during the Quality Metrics National Test.