View/download a PDF version of this page here.
If you are using the Dimensions Interpretation Tool for the first time, we strongly recommend reading through this guidance before getting started.
As part of the Impact & Insight Toolkit (Toolkit) project, Counting What Counts (CWC) will be releasing a series of tools to assist in the exploration of the public-facing, anonymous, aggregate dataset. The Dimensions Interpretation Tool is the first of these and as the Toolkit aggregate dataset grows in size and richness, so will the value of the insight that National Portfolio Organisations (NPOs) can gain from using it.
Whilst this tool can be accessed by anyone, it is designed with NPOs as its primary users. This means that the included features should provide the most benefit to NPO users and, as we plan to improve the tool over time, only the needs of NPO users will be considered. If you have any requests or recommendations on how to make it better, please let us know!
The Dimensions Interpretation Tool has been developed to help NPOs understand what their scores on the core dimension questions mean. This tool allows users to compare their survey results to other Toolkit evaluations, providing artform-specific context for their individual results.
This guide will address the following questions:
- What is the Dimensions Interpretation Tool?
- Why should you use the Dimensions Interpretation Tool to interpret your Toolkit data?
- Are Toolkit users required to use this tool?
- How do I use it?
- What is the key terminology?
- How do I interpret the data?
- How can the dataset that is powering the Dimensions Interpretation Tool be improved?
1. What is the Dimensions Interpretation Tool?
The Dimensions Interpretation Tool is an interactive tool designed for NPOs using the Impact & Insight Toolkit. Offering access to the aggregated and anonymised dataset, it has been developed to help NPOs better understand and interpret their numerical scores in their evaluations across the core dimension questions. This tool allows users to compare their survey results to other Toolkit evaluations, providing artform-specific context for their individual results.
The data which powers the Dimensions Interpretation Tool is taken directly from submitted Insights Reports, which comprise responses to the core Arts Council England dimension questions only.
2. Why should you use the Dimensions Interpretation Tool to interpret your Toolkit data?
Every NPO, when looking at one of their Toolkit evaluations, will ask: ‘How should I interpret my dimensions scores?’
An individual Toolkit evaluation, viewed in isolation, can tell an NPO whether they have been successful in meeting their creative intentions for that piece of work, through comparing their self, peer, and public results. But without the additional interpretive context provided by the aggregate dataset, it is difficult for an NPO to understand and interpret the numerical scores achieved in a lone evaluation. The Dimensions Interpretation Tool allows NPOs to compare their survey results to other Toolkit evaluations, providing artform-specific context for their individual results. For example, they might discover that what they thought was a ‘low score’ for Originality is in line with the outcomes achieved by other NPOs in their artform.
In the Toolkit, each dimension is scored from 0 to 1 using two decimal places (e.g. 0.35). Intuitively, we understand that scores at 0 are ‘low’ and scores at 1 are ‘high’. However, might this differ between dimensions? What is a ‘high’ score for Rigour? Is it different from what a ‘high’ score might be for Distinctiveness? Is 0.6 for Rigour comparable to 0.6 for Captivation?
It is therefore crucial to remember that there is no universal description of what ‘success’ looks like. For any piece of work, success is defined by those producing it. As such, it is up to the user of the Toolkit to:
- Decide upon the dimensions which are of most relevance to their work
- Interpret their dimension scores in the context of the larger dataset
- Understand what this means
This is why the Dimensions Interpretation Tool is so useful in helping NPOs to interpret their Toolkit data.
3. Are Toolkit users required to use this tool?
Use of the Dimensions Interpretation Tool is not mandatory; however, it is an important resource developed to assist NPOs in understanding and interpreting the dimension scores achieved using the Toolkit. It is therefore recommended that Toolkit users explore the data presented in this tool, as it will help to frame their results, aiding analysis and interpretation.
4. How do I use it?
Access to the Dimensions Interpretation Tool is via the Resources section of the Impact & Insight Toolkit website.
The navigation panel on the left allows the user to move through the tool and shows which section of the Dimensions Interpretation Tool is currently being viewed: Introduction or Dimension Placement.
This tool provides context in which to interpret the scores resulting from a Toolkit survey. It does this by placing a specific score amongst the aggregate dataset for a given dimension and respondent group (e.g. Challenge dimension for public respondents). The scores used in the Dimensions Interpretation Tool are from submitted evaluations.
The scores can be filtered to provide context selected by the user; for example, the user can choose to show only scores which are from evaluations submitted by NPOs with Music as their primary artform.
Slicers allow the user to filter the data displayed in the charts or tables on that page; for example, the data can be restricted to the user’s artform and other categories. Slicers are always on the left side of the display.
This is what a slicer looks like:
The slicer will indicate which category is selected, if any, and can be cleared using the eraser button on the slicer itself or the Reset Filters button in the bottom left of the Dimensions Interpretation Tool.
5. What is the key terminology?
Dimension
A metric used to measure an intrinsic quality of a work.
Peer and Public Surveys
Surveys issued to members of the public who have experienced the work are referred to as ‘Public’ surveys.
Surveys issued to invited peer reviewers after they have experienced the work are called ‘Peer’ surveys.
Evaluation
The collection of surveys used to assess a specific piece of work.
6. How do I interpret the data?
Two key elements are used to provide interpretation: a histogram and a percentile.
The histogram is a type of chart used to show the spread and range of data. The histogram shows the spread of all scores for the chosen dimension and respondent type. If the user has entered a specific score in the appropriate field, the place in which their score ‘sits’ in the histogram will be presented in a different colour.
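To make the idea concrete, the counting behind a histogram of dimension scores can be sketched in a few lines of Python. This is an illustration only, not the Tool’s actual implementation; the scores below are invented, and the binning assumes two-decimal-place scores between 0 and 1, as the Toolkit uses.

```python
def histogram_bins(scores, n_bins=10):
    """Count how many 0-1 scores fall into each of n_bins equal-width bins."""
    counts = [0] * n_bins
    for s in scores:
        # Scale to integer hundredths first to avoid floating-point edge
        # effects at bin boundaries (Toolkit scores use two decimal places)
        idx = min(int(round(s * 100)) * n_bins // 100, n_bins - 1)
        counts[idx] += 1
    return counts

# Hypothetical public-respondent scores for one dimension
scores = [0.42, 0.55, 0.61, 0.68, 0.70, 0.74, 0.78, 0.81, 0.86, 0.90]
print(histogram_bins(scores))  # → [0, 0, 0, 0, 1, 1, 2, 3, 2, 1]
```

Each position in the result is one bar of the histogram: most of these invented scores cluster in the 0.6–0.9 range, which is the kind of spread the Tool’s chart makes visible at a glance.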
The percentile is a statistical measure used for ranking. The percentile in the Tool shows where the entered score places amongst all scores for the selected dimension and respondent type.
For example, you might know that you scored 67 out of 90 on a chemistry test, but that figure has no real meaning unless you know where you rank in comparison to others that took the same test. This rank is called the percentile. If you know that your score is in the 32nd percentile, that means you scored higher than 32% of people who took the same chemistry test.
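The calculation behind a percentile rank of this kind can be sketched in Python. This is a minimal illustration under the ‘percentage of scores strictly below yours’ definition used in the chemistry-test example, not the Tool’s actual implementation, and the comparison scores are invented.

```python
def percentile_rank(score, all_scores):
    """Percentage of scores in the comparison set that fall below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

# Hypothetical comparison set: public-respondent scores for one dimension
scores = [0.42, 0.55, 0.61, 0.68, 0.70, 0.74, 0.78, 0.81, 0.86, 0.90]
print(percentile_rank(0.70, scores))  # → 40.0 (higher than 40% of the set)
```

Note how the same score of 0.70 would land at a different percentile against a different comparison set, which is why the filters you choose in the Tool matter.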
Each piece of evaluated work will have its own set of unique objectives and creative intentions which aren’t taken into account by the percentile ranking. For this reason, interpreting the percentile isn’t as simple as in the chemistry test example above – higher isn’t necessarily better. However, this limitation is mitigated by the fact that:
- Only the user gets to decide who they are ranked against
- Only the user can see the ranking
- Only the user can decide if this ranking is meaningful to them
It is essential to keep this in mind when considering the percentile shown for each dimension and respondent type.
In addition, there are a few things to remember when interpreting the percentile ranking and position on the histogram:
Sample size
Currently, evaluations with fewer than 25 public responses are not included in the Tool. This is because we cannot be confident that a small sample provides an average which is truly representative of the audience’s opinion. As the project continues and richer data is submitted, it is our intention to use margin of error to select evaluations with an appropriate sample size. This will allow evaluations of works with smaller overall attendance figures to be included.
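To illustrate why a small sample is a concern, the standard formula for an approximate 95% margin of error around a sample mean can be sketched in Python. This is not the Toolkit’s selection method, just the textbook calculation, and the four response scores are invented.

```python
import math

def margin_of_error(sample_scores, z=1.96):
    """Approximate 95% margin of error around the mean of a sample of scores."""
    n = len(sample_scores)
    mean = sum(sample_scores) / n
    # Sample variance (n - 1 denominator), then standard error of the mean
    variance = sum((s - mean) ** 2 for s in sample_scores) / (n - 1)
    return z * math.sqrt(variance / n)

# With only four hypothetical responses averaging 0.65, the true audience
# opinion could plausibly sit anywhere in roughly 0.65 ± 0.13
print(round(margin_of_error([0.5, 0.6, 0.7, 0.8]), 2))  # → 0.13
```

As the number of responses grows, the margin shrinks, which is the intuition behind requiring a minimum sample before an evaluation is included.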
Volume of data
When certain filters are applied, and the data is ‘sliced’ in various ways, there might only be a small number of evaluations for comparison. A warning will be displayed when there are fewer than 10 evaluations. This is important because if there is only a small number to compare against then the position on the histogram or percentile ranking shown in the Tool might be misleading. For example, if there is only one other Band 1 evaluation for comparison, it will report your score as either the highest or lowest of all Band 1 evaluations!
Unknown variables
Not every variable is captured and represented by the Toolkit and this tool. This means there might be factors we can’t see which are influencing the scores: for example, an outdoor work affected by rain; ticket prices higher than average for that type of work; a staff shortage at a particular time…
7. How can the dataset that is powering the Dimensions Interpretation Tool be improved?
The data used to fuel this tool is anonymised data from submitted evaluations. Once a Toolkit user clicks ‘Submit’ when creating an Insights Report, the associated data is ‘tagged’ and pulled through to the Dimensions Interpretation Tool.
Therefore, the most helpful thing someone can do to increase this tool’s efficacy is to submit their Impact & Insight Toolkit data as Insights Reports.
Furthermore, ensuring that the submitted data has the appropriate properties selected really enriches the data as it makes the filtering, or ‘slicing’, possible. Therefore, adding properties to Toolkit evaluations greatly increases this tool’s worth and the sector’s understanding of the impact of their work overall.
If you need any support in using the Dimensions Interpretation Tool or would like to discuss any learnings, do not hesitate to get in touch by contacting firstname.lastname@example.org or calling us at +44 (0) 1223 656 255.
“Aggregate data refers to […] information that is (1) collected from multiple sources and/or on multiple measures, variables, or individuals and (2) compiled into data summaries or summary reports.” For more information, please see https://www.edglossary.org/aggregate-data/
“Margin of error […] tells you how much you can expect your survey results to reflect the views from the overall population.” For more information, please see https://www.surveymonkey.co.uk/mp/margin-of-error-calculator/
The information on this page was last updated on 17 November, 2020.