Do’s:

  • Be clear about who your audience is and whether they are online or offline. To measure the impact of your public engagement activities, you need direct knowledge of and access to your audience. Different methods of interaction will also require different approaches and different definitions of “target audience.”
  • Adopt a participatory approach as much as possible, and engage your stakeholders at all stages of the monitoring and evaluation process. Make your evaluation fun and empowering for participants. Close the loop with your participants by reporting back to them on your findings.

Don’ts:

  • Do not use broad audience targets, such as “the general public” or “Canadians.” Measuring the engagement, knowledge, interests or behaviour changes of all Canadians is likely not realistic for your organization; it would require public polling. Instead, use existing research to situate your public engagement activities in the broader Canadian context. This will make your public engagement more credible and realistic, and will allow you to demonstrate the impact of your activities more efficiently in your reports.

Practitioners’ perspectives:

“In 2011, we received funding for a two-year project to engage youth on issues related to gender and to train volunteers as peer educators. One of our first steps was to meet with an external evaluator to develop a comprehensive evaluation plan. In these initial planning stages, we identified our different participants – volunteer peer educators, workshop participants, community mentors, teachers and others – along with the outcomes and objectives that the project would lead to for each of them.

Once we had articulated and documented the targets, outcomes and objectives, we were able to create the evaluation tools that we needed to monitor the project’s success in engaging these different audiences along the way. In the end, we developed short surveys for workshop participants and teachers, an interview guide for mentors, and a series of surveys, focus groups, and arts-based tools for the volunteer peer educators (the most engaged participants in the project).

By having these discussions early on in the project and recognizing that we had different audiences for different activities, we were able to prepare the evaluation tools we needed and were ready to hit the ground running on evaluation as soon as the different project activities began. Importantly, while this project had some funding dedicated to working with an external evaluator, we extensively documented our experience of working with her, applied those lessons (e.g., starting early, developing diverse tools for diverse audiences, ensuring strong baseline data), and adapted the tools for other projects where we were not able to support an external evaluator.”

– Project Coordinator for a small NGO