CoAct Webinar
Jan. 27th, 2021

On January 27th, 2021, we organised the first public CoAct webinar, "Co-shaping evaluation in Citizen Science? Towards more participatory approaches in evaluation of Citizen Science", in cooperation with ECSA and EU-Citizen.Science.

Speakers:
Anna Cigarini (University of Barcelona, CoAct)
Johannes Jäger (IEA Paris)
Barbara Kieslinger (ZSI, CoAct)
Katja Mayer (ZSI – CoAct, University of Vienna)
Obialunanma Nnaobi (Vilsquare)
Teresa Schäfer (ZSI, CoAct)
Katie Richards-Schuster (University of Michigan)

Agenda and formats

Min  Section
10   Arrival and Welcome
30   Co-Evaluation Primer: Barbara Kieslinger, Katja Mayer, Teresa Schäfer
5    Break
45   Conversations on experiences: Katie Richards-Schuster, Obialunanma Nnaobi, Johannes Jäger, Anna Cigarini
30   Discussion/Q&A
15   Feedback and Sendoff

Introduction

Citizen Science is a means of bridging science and society. In addition to the generation of scientific knowledge, Citizen Science activities are particularly well-equipped to respond to societally relevant questions, contribute to science communication and foster scientific literacy in society. While all these aspects are highly relevant for citizen engagement, empowerment and social innovation, they are rarely evaluated in a coherent way. Current evaluation activities in Citizen Science tend to focus on scientific aims, data reliability, and at most the socio-ecological relevance of the results. In the case of projects with a more accentuated educational goal, these are complemented by an assessment of the learning gains at the level of individual participants. Wider societal and political implications are hardly ever assessed, which is exacerbated by the fact that they are notoriously hard to measure. 

During the discussions at the 2020 ECSA Conference, it became clear that there are already a lot of evaluation instruments available – including digital ones – and that some of them also enable participatory dimensions. However, it was reported that few of these instruments, if any, are actually adopted. Is it because they are not widely known? Is it because it is so difficult to create content-independent, digital environments that enable participatory evaluation across many domains and research questions? Or is it because evaluation is often tacked on to ensure compliance, instead of being a central part of research design? This webinar is dedicated to discussing strategies, formats and tools for participatory evaluation, with a special focus on co-evaluation.

Co-evaluation is a form of participatory evaluation that initiates the conversation on expectations, objectives and impact right at the start of a project or initiative – either when the programme or research design is co-created with different stakeholders, or at the latest when the participation of actors is negotiated. The main difference between co-evaluation and conventional types of research evaluation is that participants are involved in deciding on project goals and evaluation instruments.

Objective of the webinar

Participatory evaluation is an approach that aims at giving voice to the stakeholders of an intervention in its evaluation design, process and results. This webinar will shed light on the specificities of this methodology, as well as the challenges and opportunities related to its application in citizen science. The webinar furthermore aims to provide an overview of co-evaluation as a strategy and to discuss which approaches and options have long been available in participatory research and citizen science, how they have been received, what opportunities they have opened up, what obstacles have been overcome, and what we can learn from them for the future.

After an introduction and an overview of the state of the art of evaluation (participatory and non-participatory) in citizen science, core principles of co-evaluation will be presented. Experts will then discuss their experiences on a panel, with a special focus on how to approach participants as evaluators, current challenges in times of crisis and physical distancing, and the resulting digital options for more participation in evaluation.

This webinar is targeted towards researchers, evaluators, project designers, and communicators working in a participatory research and citizen science context. The objective of co-evaluation is not only to promote discussion and learning around the scientific dimension of a project; it should also promote a project's impact, including change in the living environments of project participants. Thus the discussions – for example, on how best to approach participants as evaluators – are useful to people involved in citizen science, program design, policy, and planning.

Topics

Participatory evaluation in citizen science, co-evaluation, how to approach participants as evaluators, social impact

Summary of the webinar

The guiding principle of the panel was to bring together different horizons of experience. Because the majority of evaluation approaches in Citizen Science are still primarily top-down and ex-post, we invited people from diverse fields to join the conversation on participatory evaluation and share their particular perspectives on the issue. These backgrounds include programme evaluation, youth work and social work, philosophy of science, and citizen social science. By having our panelists share how they co-design their evaluation activities, we wanted to highlight the existing body of knowledge, including the various benefits and limitations they have come across.

Because an hour of discussion only allowed us to touch on the variety of important experiences, many interesting aspects were addressed only briefly and not in the detail they deserve. In the following, we summarise the central themes that emerged from the conversation, some of which we have broadened with additional information and sources.

Participatory evaluation and co-evaluation: Preconditions and aims

The panelists were united by the experience that participatory evaluation made both research processes and programme design more robust, but did not necessarily make them easier. Such evaluations require extensive preparation, commitment of time and resources, as well as a willingness to "get down to the nitty-gritty," i.e., to open up science in such a way that feedback can be incorporated directly into the process.

Furthermore, it is necessary to plan for capacity building, in the sense of creating a baseline of skills and a communication culture that enables participatory evaluation in the first place. Capacity building may include trainings, where participants learn about processes and methodologies, and about how to make sense of these in line with their own expectations and potential impacts. They may also be instructed in valuation processes, reflecting on their values and norms in relation to the project and its objectives. Among other things, this has the double benefit of sensitising participants as well as the involved academic scientists to multi-perspectival approaches. It is also a way of addressing the fact that deliberative processes do not always lead to consensus, nor should they. As Johannes put it, integrating different standpoints and still moving forward with a process enables a "collective intelligence," and in turn cooperation and collective action, that is neither possible nor valued in a traditional research or evaluation process. What is more, such deliberative approaches directly contradict the traditional scientific efficiency logic: they take a lot of time, are highly complex, and do not necessarily end with consensus, or a fixed output for that matter. However, integrating different forms and formats of expertise and authenticity, and being open to the diversity of actors, means enabling their lived experience to inform a more comprehensive evaluation process, and in turn facilitating a democratisation of knowledge and more sustainable change.

As Obialunanma pointed out, experience shows that the more stakeholders with varying backgrounds are involved in the evaluation, the more validity is ascribed to the results, while their involvement also creates shared ownership of such processes and their outcomes. Stakeholders also bring invaluable field knowledge to the table that would otherwise be inaccessible, which contributes to the overall quality of the process. In a similar vein, Johannes pointed out that, contrary to the disinterested, neutral, or objective ideal of science, it does matter who does the research.

Another aspect that feeds into the complexity of participatory processes in general and co-evaluation in particular is the question of how to deal with shifting expectations and evolving project goals in practice, as solutions need to be specific to the context they are employed in. One dimension of this balancing act is sensitising all participants – academic and non-academic – to existing power relationships, and addressing such relations throughout the participatory process. As academic scientists and facilitators, it is imperative to create safe spaces for participation, to realise when to step back and let our participants take the lead, but also when to step in again, in a dynamic process much like a dance, as Katie called it. Power must be shared for a participatory process of any kind to be successful.

Another, closely related dimension is building and nurturing trust between the diverse actors in the process. A carefully designed co-evaluation helps to create robust and trusting relationships, even if it takes time and resources to understand the scope and modalities of participation that each actor feels comfortable with. This also means introducing the concept of evaluation itself with care: our panelists described encounters with scepticism towards evaluative practices, as participants thought they were being evaluated themselves. Thus, when participants become co-evaluators, it is key to explain how they may co-shape the evaluation process, to help ease them in. It might also be good to use less loaded terminology, such as "reflection", "impact design", and so on. The question of what language to employ, and how, must also be considered more generally, as language might form a barrier to entry that excludes important stakeholders from a co-evaluation. The same holds true for methodologies, which must be chosen according to the specificities of the participants as well as the evaluation process.

This is especially pertinent as current requirements regarding social distancing due to the pandemic require many projects to rethink, for digital spaces, approaches that were initially designed for physical interactions. This fundamentally recontextualises the digital divide as an obstacle to equal participation, when online activities are often the only interactions allowed. Thus, the question of how to reach populations that do not have access to digital technologies, or feel less comfortable using them, needs to be considered. Answers might be to rely not only on digital communication, and, where digital technologies are employed, to keep them low-threshold and low-bandwidth. Finally, the tools to be employed need to be carefully chosen, tested, and adapted or dropped where necessary and sensible. Anna, for instance, gave the example of sending out physical research diaries to collect participant inputs and bridge the digital divide. In any case, the quality of the interaction as well as the materials produced needs to be monitored closely when transferring activities intended for physical interaction to a digital sphere, and there should always be time and space for feedback.

Generally speaking, achieving a trusting, respectful and sustainable collaboration is much easier if stakeholder engagement is continuous and sought from the very beginning of an endeavour. In this regard, it is also important to think about valuation and rewards for the efforts spent in a co-evaluation. In terms of remuneration, this might mean providing "stipends" as recognition for both effort and time. Such contributions might enable participation in the first place, as they free co-evaluators who, for instance, have responsibilities as providers to their families. Other benefits that co-evaluation might bring include more usable and sustainable outputs that benefit a community, more visibility and stronger community processes, and the multiplication of efforts through the participants. However, harkening back to the shifting expectations touched on above, it is important to actively engage with the expectations, hopes and needs that might arise from participatory evaluation activities. Otherwise, hard-earned trust might be damaged unnecessarily.

Responding to a question from the audience, the panelists also discussed how best to establish participatory approaches to evaluation in Citizen Science, gain more visibility and generate more recognition. Katie reported that the establishment of a topical interest group ("TIG") in the learned society helped a lot in that regard. Through TIGs, it was possible to organise sessions at conferences and thereby bring stakeholders from participatory evaluation exercises into the academic field to present their positions and experiences. In a similar vein, Obialunanma suggested presenting evidence that participatory evaluation works and sharing best practices, and through this capturing the attention of the field. Johannes would like to see greater visibility of participatory evaluation practices in the rest of science, as it is a very active field of research that offers answers where elsewhere there is a lot of complaining. However, he advised not to expect too much, as Citizen Science and traditional research projects operate under very different logics. Furthermore, he pointed out that participatory evaluation makes sense especially for projects that have been co-designed. For other formats, such an approach would probably not be justified, since the necessary channels to collect feedback, for example, do not exist during the project.

All the topics addressed here offer starting points for further work on operationalisation. We will take up some of them and examine them in more detail in the near future, for example in further workshops in the context of Citizen Science conferences (coming up: a Citizen Science Association workshop series in May 2021).

Questions to ask when designing and implementing a participatory evaluation

  • How can we best create environments for deliberative processes to tap into “collective intelligence”?
  • How do we ensure the dialogues remain open, inclusive and fair? 
  • How do we design the participation so that marginalised voices can also take part?
  • How to best monitor and use the shifting expectations and the evolving project goals in co-created settings? 
  • How to best incorporate feedback into the process? 
  • How to best systematise the many different forms and formats of input from co-evaluation?
  • How is the quality of the approaches affected by going digital?
  • Which are the best tools to employ, both offline and online?
  • Digital divide: how to be inclusive by not relying solely on digital communication?

Speaker biographies

Participatory research with young people is important because young people are experts in their lives, and their lived experience can and must shape knowledge developed about them and their communities.

Katie Richards-Schuster, Ph.D., is an Associate Professor and Director of Undergraduate Minor Programs at the University of Michigan School of Social Work in Ann Arbor, MI, USA.  Her research focuses on understanding the strategies and approaches for engaging young people in communities, the contexts and environments that facilitate youth engagement across settings, and the impact of youth participation in creating community change.   She has worked in and with communities to promote youth participation and has led national and global efforts to increase youth voice in research and evaluation.  She is a leading scholar in using participatory research and evaluation approaches with young people and communities and is the former co-chair of the Youth Focused Evaluation TIG within the American Evaluation Association. 
https://ssw.umich.edu/faculty/profiles/tenure-track/kers

I realized early that having (all) stakeholders contribute to designing and implementing programme M&E systems leads to better understanding of the intervention, strengthens ownership, improves accountability and gives voice to the most vulnerable. The stakeholders own the process and are “Champions” in its implementation.

Obialunanma Nnaobi is a development practitioner whose work combines elements of research, strategy and advocacy to support good governance causes, the innovative use of technology and the empowerment of women and youth. As co-founder of Vilsquare, she works with a wide range of partners to deliver pan-African solutions to the continent's infrastructural challenges. She has held key positions in multi-stakeholder initiatives in Nigeria such as the Open Government Partnership (OGP), where she supports diverse stakeholders to collaboratively achieve shared accountability objectives and development targets. Twitter: @nmannaobi @vilsquare
https://vilsquare.org/makershub/

We must move away from metric madness, from our obsession with outcomes, towards a process-oriented form of evaluation that is tightly integrated with teaching, mentoring, and facilitation.

Johannes Jaeger is an evolutionary systems biologist and philosopher. He is interested in developing a theory of knowledge that is tailored to open science, inspired by his work on organismic agency and innovation in biological evolution. He is the current D’Alembert Research Chair at the Université Paris-Saclay and the Institut d’Études Avancées (IEA) de Paris, and associate faculty at the Complexity Science Hub (CSH) Vienna. Twitter: @yoginho
www.johannesjaeger.eu

Considering evaluation as an integrated research activity and establishing a structured dialogue with participants from the very beginning, beyond objective and quantifiable measures, is crucial to build trust and mutual understanding, and thus a reflective evaluation capacity.

Anna Cigarini is a PhD candidate in information and knowledge society at Universitat Oberta de Catalunya. She is a member of OpenSystems (Universitat de Barcelona) in the CoAct project, and a collaborator at Dimmons (Universitat Oberta de Catalunya). She holds an MSc in sociology and demography, an MSc in population studies and a BSc in statistics. Anna is interested in the intersection of the technical and social aspects of technology. In particular, her research focuses on the governance of citizen science communities of practice.
Twitter: @anna_cigarini, @OpenSystemsUB, @dimmonsnet

If we want to take co-design seriously we also have to take co-evaluation seriously.

Barbara Kieslinger is a senior researcher and project manager at the Centre for Social Innovation (ZSI) in Vienna, Austria. Since 2012, Citizen Science has been one of her research topics, alongside the relationship between technological and social innovations. Barbara has coordinated large research projects dealing with innovations in workplace learning and was recently involved in projects related to digital social innovation and the maker community. She currently coordinates an EC-funded project on open healthcare, which facilitates co-design of open healthcare for people with physical limitations. Barbara also serves regularly as an external expert for the European Commission and as a reviewer for scientific journals, and has recently been elected to ECSA's board of directors. Twitter: @bkieslinger

In citizen science we need evaluation that matters.

Katja Mayer is a sociologist at the University of Vienna, Austria, who works at the interface of science, technology and society. Her research examines the interactions between social science methods and their public spheres, focusing on the cultural, ethical and socio-technical challenges at the interface of computer science, social sciences and society. In addition, she is a Senior Scientist at the Centre for Social Innovation (ZSI) in Vienna and an Associate Researcher at the University of Vienna's 'Governance of Digital Practices' platform. Twitter: @katjamat

Teresa Schäfer studied Economics at the University of Vienna. She is a senior researcher at ZSI and focuses her work on participation processes in digital social innovations and the assessment of their impact. Teresa has been leading the consultation process for the development of the Citizen Science Whitepaper for Europe and is work package leader for evaluation and impact assessment in several citizen science projects (e.g. CAPTOR, CoAct, EU-Citizen.Science). Teresa has many years of experience in participatory methods for design, evaluation and impact assessment, involving a broad range of citizens, such as retired people or migrants, in research projects under the 6th/7th Framework Programmes and H2020.

Further resources

Webinar Documentation PDF (incl. references)

Webinar Slides PDF (Zenodo)

Webinar Recording

Webinar Transcript PDF

Zotero Group “Co-Evaluation in Citizen Science”