When it comes to evaluation, there’s no one-size-fits-all approach, which is why the Impact & Insight Toolkit was designed to be flexible.
But flexibility can sometimes bring uncertainty: Where should you begin? How can you make sure that your evaluation serves your organisation’s goals?
We suggest starting from your end goal and working backwards from there.
What is working backwards?
Working backwards is a problem-solving approach where you begin with your desired outcome in mind, then plan in reverse to figure out how to achieve it.
For instance, do you hope to use the results of an evaluation to:
- Generate advocacy materials?
- Apply to a particular funder for funding?
- Dig deeper into the individual experiences of your audiences/participants/visitors/collaborators?
We know that all of these things matter to you, but establishing your primary end goal will guide your question choice and evaluation structure. In the context of creating an evaluation, working backwards is particularly useful when:
- You are clear about what you want to understand about your work and/or the people experiencing it
- You wish your evaluation to tell a compelling story about your work
- You want to make sure your data is actionable and aligned with your goals
This blogpost explores how applying the ‘working backwards’ concept to the Toolkit can help you build a stronger foundation for evaluation, tailor your surveys, and ultimately make more confident, data-driven decisions.
Image credit: Brendan Church
Step 1: Identify your end goal
Before you even look at the Toolkit, consider your end goal.
For instance, are you hoping to apply for funding from a funder that prioritises risky or challenging arts experiences? Are you aiming to produce advocacy materials which emphasise that your participants have a great time when they're with you? Or perhaps you want to demonstrate that your work is not currently engaging a particular demographic, and you need data-based evidence to back up your anecdotal impressions and prompt change within your organisation.
Whatever your end goal is, keep this in mind as you go through the evaluation creation process. Remember, your evaluation needs to result in data that will be genuinely useful for you.
Step 2: Move onto the Toolkit – articulate your ambitions
Having established your end goal, you can now move onto the Toolkit.
The first step is to define what quality means for your organisation. Everyone’s definition is different, and the Toolkit is deliberately designed to allow you to state yours clearly.
We recommend starting with your mission statement, core values, or strategic goals. Then, map those against the Toolkit’s Dimensions Framework to identify the questions that align most closely. This process helps lay the groundwork for a meaningful evaluation.
Whilst doing this, keep in mind your end goal.
Using our first example, if you want to apply to a funder that prioritises risky or challenging work, make sure your evaluation includes questions that demonstrate this is also important to you. Might this mean that you have four dimensions which directly align with your mission statement, and one which is more tangential but will ‘speak’ to your end goal? Quite possibly, and that’s just fine!
Read more about how to articulate your ambitions and select dimensions for your organisation in our Evaluation Guide.
Image credit: Senning Luk
Step 3: Jump to the end with reporting & analysis
Once you have identified your ambitions and chosen your dimensions, it may feel natural to move straight into survey creation. Before you do, however, we recommend looking ahead at the reporting and analysis tools available in the Toolkit.
Why? Because understanding what your data will look like once it has been collected can change how you choose to design your evaluation.
For example:
- Knowing that it is possible to compare your dimension results by age or gender within the Analytics Dashboard may encourage you to include standardised Question Bank demographic questions within your public survey.
- Knowing that stacked level-of-agreement charts are available within the Insights Report may influence you to include dimension questions that will help you make statements advocating for your organisation.
- Understanding how standardised questions, such as those found in the Question Bank or the dimensions, unlock richer analysis and benchmarking opportunities within the Reporting Dashboard may inspire you to feature them in your evaluations, rather than creating your own custom questions.
By fast-forwarding to your desired end point, you can work backwards, making sure every question you include is purposeful.
Image Credit: Annie Spratt
Step 4: Design your evaluation with intention
Once you have stated your ambitions and understand how your data will be reported, you’re ready to build your evaluation. The Toolkit offers great flexibility here, giving you several choices along the way:
Choosing a Template
Firstly, choose a template that matches the type of work you are evaluating (e.g., exhibition, performance, or participatory work). These templates are designed to suit different delivery contexts and come populated with carefully considered questions, along with your chosen dimension statements.
Choosing Your Respondent Types
Consider who you are looking to gather feedback from. Are you looking for feedback from public audiences? Self-assessors from your own team? Peer reviewers whose professional opinions you value? Each respondent type brings a different perspective. You may choose to use all three, or just one — it depends on what kind of insights you are looking for.
Choosing Your Questions
You can stick with your chosen template if you feel it is a good fit, or you can customise it to suit your needs. You can swap in other questions from the Question Bank, include additional dimensions, or write custom questions specific to your work. Make sure the questions you choose are purposeful and will guide you towards your end goal.
Grouping your surveys
How you choose to group your surveys also affects the reporting options available to you once you have collected your data. For example, creating a single evaluation for an annual festival, with a new public survey for attendees each year, will enable you to compare your data year-on-year within the Analytics Dashboard. Alternatively, creating a dedicated evaluation for each year of the festival will provide a breakdown within the Reporting Dashboard of how the dimension scores compare from one year to the next.
It is possible to move surveys from one evaluation to another within Culture Counts so you can make the most of the different Toolkit analysis features, but knowing from the outset which setup will suit you best could save you time when it comes to interrogating your data later on.
How you structure your evaluations and design your surveys is entirely up to you, but we encourage you to think about these things as you go through the process.
Image credit: Alexander Andrews
Closing thoughts
Approaching your evaluation from the end goal enables you to plan with purpose, ensuring you have the data required to effectively deliver against your stated ambitions.
By starting with what you want your evaluation to achieve and understanding how your data will be reported, you can design more strategic evaluations tailored to your specific data-collection needs.
This ensures you’re building a strong foundation for meaningful data collection — giving you the clarity and confidence to make decisions grounded in evidence.
If you have an end goal in mind that you would like your evaluation to support, but are uncertain how to go about it, just get in touch and we can guide you accordingly.
Featured image credit: Daniel Schludi