The Royal Albert Memorial Museum: Criminal Ornamentation

Installation image of 'Criminal Ornamentation'

Evaluating ‘Criminal Ornamentation: Yinka Shonibare MBE curates the Arts Council Collection’

The Royal Albert Memorial Museum (RAMM) in Exeter is the first cultural organisation to use the new Impact & Insight Toolkit (Toolkit), which it has used to evaluate a new, temporary exhibition running from January to March 2019. RAMM has ambitious audience-related goals associated with an increase in funding for contemporary art, and its objectives are two-fold: to encourage people who like contemporary art but would not normally visit RAMM to come to the museum, and to encourage RAMM’s core audiences – museum-goers who may not normally consider attending an exhibition of contemporary art – to interact with contemporary art. Robust evaluation is therefore extremely important for the museum.

With just two weeks to go before our exhibition opened, we were able to understand the evaluation method, set up our online questionnaires, train volunteers and start collecting data. The Culture Counts team, particularly Siân Tattersall, was really supportive on our workshop day and at other times, answering our questions and helping get us started.

The great thing about the platform is that we gained feedback straight away. We learned quickly that our peers loved the exhibition: it made them feel ‘lucky that such a good exhibition had come to Exeter’, and they gave higher scores than we expected on Captivation, Enjoyment, Rigour and Risk. We also learned that the public were more captivated by the exhibition, and enjoyed it more, than we had expected, making comments like ‘thought [RAMM] was more traditional but now think it’s more critical and contemporary’ and ‘It’s great to see the walls filled with different media; letting people interact (taking pictures) makes it more fun’. We also learned that this exhibition has been successful in attracting a younger audience: in the first two weeks (at the time of writing), our top age range was 19–24 year olds.

Designing a survey is simple and intuitive, and custom questions can be added easily. We found it helpful to start with a self-survey, giving us quantifiable data on what we wanted to achieve with this exhibition. Some of our targets were specific to our organisation (for example, we wanted to know whether the exhibition had changed people’s view of RAMM), so having the option to easily add custom questions was very helpful. During the exhibition, it has been straightforward to gather the data and feed back to museum staff.

There were a few things we didn’t pick up from the workshop training and wish we had known. For example, we used the live survey to train our volunteer interviewers a day before our exhibition opened. Unfortunately, this meant data taken during training is included when we would have preferred it to be excluded. For future evaluations, we will create a copy of the survey and call it ‘Survey for training’. Another thing we learned is that the automated emails to our peer reviewers landed in their junk folders and might have stayed there forever if we hadn’t realised! A better way to reach our peer reviewers was to send individual emails containing their unique web link.

There are a few things we would like to change about the platform:

  1. Surveys should link with Audience Finder so we can get audience segment data from the evaluation.
  2. The interview-based survey should have the same front page as the web survey; our interviewers found it a bit off-putting to launch straight into the questions without an explanation page.
  3. Self-evaluation results should be included on the combined results graph – currently they do not appear until a post-evaluation survey is complete.
  4. There should be better clarity throughout the platform on whether self, peer or public surveys/results are being viewed; currently, all three options are always visible, and this can be confusing.

Despite these small issues, we have found that the Impact & Insight Toolkit has made evaluation efficient to set up, easy to share with colleagues and simple to repeat. We are looking forward to building our evaluation database and to seeing how the Toolkit works for other arts organisations – particularly any museums!

Written by Sara Flint, RAMM Data Officer

Photo credit: Alex Campbell Photography 2019