When you’re building evaluations through the Culture Counts platform, you’ll notice that there are a variety of standardised metrics to choose from, beyond those included in any template you have been provided. For projects that involve audience participation, such as workshops, community projects, and discussion groups, we have dedicated participatory metrics specifically tailored to evaluating this kind of work.
Between 2015 and 2016, Counting What Counts facilitated a sector-led, Arts Council England-supported project, the Quality Metrics National Test (QMNT), during which we set out to develop a set of standardised participatory metrics to help measure the impact of participatory work across the arts and cultural sector. It became clear that the specific outcomes cultural organisations were most interested in measuring broadly fitted into three categories: conducive environment, participant experience and participant development. The results of the QMNT also showed that the metrics developed largely aligned with the Arts Council’s Quality Principles for Working with Children and Young People. The report on this process was published and is available here.
Whilst these results were very positive and suggest that we can be confident in the metrics, they have now been passed on to the rest of the cultural sector for ongoing refinement; more work can always be done to develop them as the sector continues to evolve. We’ll be exploring this further through the Artform and Participatory Metrics Strand, which is scheduled to commence this winter, 2019. Every Culture Counts user has access to these participatory metrics in their dashboard; examples include:
- Skills: I gained new skills
- Contribution: I felt like my contribution mattered
- Confidence: I feel more confident about doing new things
- Voice: My ideas were taken seriously
When thinking about any evaluation strategy, you should adapt it on a case-by-case basis to ensure it is appropriate for the event in question. For example, if you are collecting quantitative data using the Toolkit, you could gather it on tablets during the debriefing element of the experience. Some organisations have found really innovative ways to incorporate data collection into the creative experience itself. If you’re collecting feedback in a more informal fashion, perhaps simply through conversation or using pictures and colours, you can interpret it through methods such as ethnographic observation and other qualitative research.
A question we often hear is why and how self and/or peer review might be implemented when delivering and evaluating participatory activities. Obtaining different perspectives can help you clarify your objectives for the work and can also assist you in understanding where the work might sit in the wider arts and cultural sector – what does a fellow professional consider the impact of your work to be? Given the wide range of participatory work, there may be some cultural activities to which it isn’t appropriate to invite a peer reviewer. This could be because the project takes place over a long period of time, such as a weekly workshop over a 12-week period; involves a very small group of participants; or deals with particularly sensitive subject matter.
A one-size-fits-all approach to evaluation is never going to be flawless for ‘all’, and this is especially true of participatory work. However, it is important to have a method through which you can understand the impact of your participatory offerings. Our support team is always available to answer any questions you may have about this process, and we can offer advice on how best to conduct your surveys if you are unsure. Please remember that in addition to the four Arts Council mandatory evaluations, all users are welcome to use the Toolkit to build their own evaluations, using any combination of standardised dimensions and custom questions; there is no limit to the number of evaluations you can create.