It was recently announced that Counting What Counts (CWC) has made a successful bid to Arts Council England (ACE) to continue as the provider of the Impact & Insight Toolkit (Toolkit) for 2023-26.

We’ve learned a tremendous amount over the past four years of delivering the first iteration of the Toolkit, and those lessons played a big part in our proposals to ACE for the second iteration. We’re delighted that we will be able to implement these plans for the organisations which choose to use the Toolkit going forward.

Since the beginning of the project, it has been our intention to be as open as we can, both to help the cultural sector and to maximise the value of our publicly funded work.

In that spirit of transparency, this blog post highlights some of the biggest lessons we’ve learned. We have been listening throughout the project, and we want to give you an indication of what is to come next year!

The need to focus on understanding how an evaluated project contributes to wider objectives, rather than looking at it in isolation

The Toolkit remains a powerful tool for the evaluation of independent events in a programme of work. However, we’re keen to place more focus on interpreting evaluations ‘in the round’, as part of a larger evaluation strategy.

An evaluation should be a rigorous and structured assessment of a completed or ongoing activity, intervention, programme or policy that will determine the extent to which it is achieving its objectives and contributing to decision-making

This definition of evaluation is from the Public Health England website. Thinking about what we are doing with the Toolkit in light of that definition, we are trying to understand:

  1. How a completed activity contributes to an ongoing programme of work
  2. Whether that activity achieved its own objectives
  3. Whether that programme of activities is achieving its objectives

To do any of these effectively, we need the objectives of both the individual work and the wider programme to be clearly articulated; we can then look at multiple evaluations to get a sense of how well each activity achieved its own goals and how it contributed to the overall goals of the programme.

In the future, we are going to assist organisations in understanding how their work contributes to their overall ambitions and objectives. Which leads us on to the next insight…

 

A laptop with the word ‘Goals’ written. Photo by Clay Banks on Unsplash

Describing your organisation’s vision in terms of the dimensions is necessary to ensure the Toolkit’s process makes sense

This is something we talked about when we first launched the Toolkit back in 2018, and it was highlighted again through our case study work.

The idea is to have senior staff in your organisation (ideally the board) review the Toolkit dimensions and choose a set which reflects the mission or ambitions of the organisation and the work that you produce. Using these dimensions in your evaluations allows you to relate the outcomes back to the overall ambitions of your organisation, as well as to assess whether that individual activity met its own objectives.

To bring this message to the forefront, we created a video, ‘Embedding the Toolkit within your Organisation’, explaining what this step is and how it can help you with your evaluation.

In the next stage of the Toolkit, we will be implementing more concrete tools to help you convert your organisation’s mission into the language of the dimensions, and then use the Toolkit’s evaluation process to track your progress against your stated ambitions.

 

An image of a globe. Photo by CHUTTERSNAP on Unsplash

For many organisations, the most pressing barriers to evaluation are specific to their circumstances, rather than issues which affect everyone

It would be great if the biggest hurdle to successful evaluation using the Toolkit were the same for everyone, because then we could target that one issue and make a huge difference across the sector.

Unsurprisingly, this is not the case; we know the sector is incredibly diverse. Instead, when we talk with organisations working with the Toolkit, their most pressing concerns stem from issues which are specific to them and maybe a small group of other similar organisations.

Such groups include:

  • Touring organisations
  • Participatory focussed organisations
  • Organisations focussed on producing/delivering work for children/young people
  • Organisations which only produce/deliver a single work per year
  • Consortia of organisations

Some of these issues are logistical (difficulties getting data from venues or partner organisations); some are methodological (sample size issues when dealing with small numbers of respondents); some are analytical (how to maximise insight from a single evaluation without others to compare to); some are limitations in the questions we use (how to survey children/young people effectively).
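To make the sample-size point concrete, here is a minimal sketch in Python (our illustration, not part of the Toolkit; the 0-1 score scale, the made-up data, and the normal approximation are all assumptions made for the example) showing how the uncertainty around an average score narrows as the number of respondents grows:

```python
import math
import random

def mean_with_ci(scores, z=1.96):
    """Return the mean of the scores and the half-width of an
    approximate 95% confidence interval around it.

    Illustrative only: the normal approximation used here is optimistic
    for very small samples, where a t-interval or bootstrap is safer.
    """
    n = len(scores)
    mean = sum(scores) / n
    variance = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half_width = z * math.sqrt(variance / n)
    return mean, half_width

random.seed(42)
for n in (10, 50, 500):
    # Simulated respondent scores on a 0-1 scale (made-up data).
    scores = [random.uniform(0.4, 1.0) for _ in range(n)]
    mean, hw = mean_with_ci(scores)
    print(f"n = {n:3d}: mean = {mean:.2f} +/- {hw:.2f}")
```

With ten respondents the interval can be wide enough that comparing two events is close to meaningless, which is one reason small audiences call for different analytical handling rather than a one-size-fits-all approach.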

We have mitigated some of these issues through the development and distribution of targeted guidance and by making the Toolkit more flexible, so there isn’t a mandated one-size-fits-all approach to evaluation.

Some of these issues require further research, development and testing to resolve. But that’s okay; we have ideas and will be focussing more of our time on both adding flexibility and figuring these things out in the coming year.

 

Hurdles lined up, ready for a race. Photo by Josh Boak on Unsplash

Numerical data has benefits, such as being standardised and easy to compare, but we should still invest in complementary qualitative feedback

We’re of course very focussed on the dimensions and their ability to take quantitative measurements of the impacts on the people who experience cultural works. The shared language that we have co-created with the sector and its use for this purpose forms the backbone of the Toolkit project, enabling organisations to collect standardised data which can be compared and benchmarked across different works.

Additionally, qualitative, written feedback can be hard to analyse. If you’ve got hundreds of written comments, it’s very hard to make sense of that data without spending hours reading and taking notes.

However, we have learned that even a small amount of open-ended feedback can ‘bring to life’ the quantitative results by providing some much-needed context in which to interpret them. The inclusion of the three words question (‘Please give three words to describe your experience’) is a simple way to do this. It is a single question, framed in an open-ended way but with enough of a constraint that the answers remain straightforward to analyse and make sense of.
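To sketch why that constraint helps, here is one simple way (our illustration in Python, not a feature of the Toolkit; the responses are hypothetical) to turn a set of three-word answers into an at-a-glance frequency summary:

```python
from collections import Counter

# Hypothetical answers to 'Please give three words to describe your experience'.
responses = [
    "moving, joyful, surprising",
    "joyful thoughtful bold",
    "surprising moving joyful",
]

def word_frequencies(responses):
    """Count how often each word appears across all answers (case-insensitive)."""
    counts = Counter()
    for answer in responses:
        for word in answer.replace(",", " ").split():
            counts[word.lower()] += 1
    return counts

# The most common words give a quick picture of the audience's experience.
print(word_frequencies(responses).most_common(5))
```

Because each answer is only three words, even a simple tally like this is readable in a way that hundreds of free-text paragraphs are not.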

We see the value in bringing analysis of qualitative feedback to the cohort using the Toolkit and intend to do more of this in the future.

Seemingly random numbers displayed on a machine. Photo by Nick Hillier on Unsplash

 

We are excited to develop our Impact & Insight Toolkit offering to you over the next funding round. Watch this space to find out more about what’s coming!

 

To read the case studies which informed some of these learnings, please see the links below:

Yorkshire Sculpture Park

Queen’s Theatre Hornchurch

To read information on further user engagement which has informed some of these learnings, please explore the following links:

Strategic Development Strand 

Artform and Museum Metric Strand

 

Featured image credit: stack of books. Photo by Kimberly Farmer on Unsplash