4 November 2018
By Paul Kennedy, ScienceLink
Science communication efforts usually aim to teach and inform, influence decision-making in policy and governance, or excite public interest in scientific or technical topics. But how can you tell whether your efforts are having the desired effect, or if you’re even reaching the right people?
Dr Eric Jensen (@JensenWarwick) of Warwick University led a day-long practical programme at CREST on Friday 2 November entitled Evaluation in the field of public science communication, which provided science communicators from across South Africa with approaches and tools to measure and understand the impact of the work they are doing. This could help them test the effects of a school science outreach initiative, or measure the influence of science communication efforts on public policy.
Participants dived straight into practical discussions of what they know about their intended audience, and how they know that information. Jensen then introduced some of the theoretical aspects of measuring impact, and how to go about setting useful objectives for measuring your science communication impact.
Planning for good evaluation is important: “Make sure there is good alignment between what you’re doing and what you’re trying to achieve,” he said.
He also highlighted that measuring your impact is only the first step towards improving your science communication efforts. “Good evaluation requires humility,” he said. “You have to take a scientific approach: accept that your approach is provisional and adjust when you learn new information.”
Jensen emphasised the difference between outputs and outcomes: for maximum impact, a science communicator needs to focus on the outcomes of a communication or engagement effort. Outcomes for people engaging with a science communication effort can include learning or development; change in attitudes; improved knowledge, skills or interest; or improved confidence in a particular area of competence or knowledge.
The second half of the workshop focused on survey design, as a well-designed survey is one of the best ways to evaluate the impact of your science communication work. Again, this was a practical session: participants were shown the differences between good and bad surveys, pitfalls to avoid (such as leading questions, or allowing various biases to influence the outcome of the survey), and how to design useful questions. Jensen covered two of the most important tools in survey design: the repeated-measures survey design, and closed- and open-ended questions.
Jensen wrapped up by answering questions about the participants’ own survey designs, guiding them in writing their own survey questions and giving advice to those who needed it.
This workshop precedes the inaugural SCICOM100 conference, which has the theme ‘Science communication in a democratic South Africa: prospects and challenges’ and runs at CREST in Stellenbosch from 5 to 7 November 2018.