A Conceptual Framework for the Operationalisation of Cooperation Analytics in Citizen Science Projects

Introduction

The aim of this article is to define cooperation and to build indicators that allow us to measure the concept of cooperation quantitatively, based on a systematic literature review, specifically for citizen science projects taking place in the field of social sciences and humanities (SSH). We put forward the argument that eliciting the practices of cooperation can contribute greatly to understanding the benefits of citizen science practices for reciprocal knowledge production, that is, for both academics and civil society.

Cooperation has been coined as a notion that allows us to better understand how citizen science in SSH works, beyond the normative notion of “participation” (Göbel, Mauermeister, and Henke 2022). “[Cooperation] provides a more nuanced account of the actors, their activities and interrelations, including ones that are not accounted for, so far” (Göbel, Mauermeister, and Henke 2022). It is broadly used to qualify “a new era characterized by the cooperation of amateur and professional scientists” (Peters 2020). Yet the notion is still underexplored, and is neither defined precisely nor measured in the citizen science literature (see Göbel et al. 2019; Vohland, Weißpflug, and Pettibone 2019). The Citizen Science Network Austria sets “collaboration” as one of the twenty quality criteria for citizen science projects (Heigl et al. 2018; 2020). At the Austrian Citizen Science Conference, Heinisch and Seltmann (2018) referred to “cooperation” as the main concept for discussing practices between citizen scientists but used it interchangeably with “collaboration” without providing a definition. In 2015, when the European Citizen Science Association (ECSA 2015) established the ten principles of citizen science, “collaboration” was not yet among them. This shows the increasing attention being given to measuring quality criteria, and the growing relevance of the notion of collaboration and, more recently, of cooperation. Various other terms, instead of “cooperation,” are often used to understand citizen science practices, such as “engagement,” “collaboration,” “participation,” and “governance” (Kullenberg and Kasperowski 2016), such that it is “difficult to identify the terms revolving around citizen science literature” (Kullenberg and Kasperowski 2016). Theoretically speaking, “citizen science has different meanings” (Kullenberg and Kasperowski 2016), which differ from “the meanings produced in practice” (Morillon 2021). Projects are “organized in various scientific disciplines” (de Vries, Land-Zandstra, and Smeets 2019) and therefore their contributions are hard to analyse at a general level.

First, we present the relevant literature for the evaluation of practices in citizen science and the theoretical framework of conventions that guided our broader research scope of implementing indicators into a digital platform. Second, we justify the methodology of our systematic literature review according to this research scope. Third, the findings are structured into four macro-indicators—asymmetry, diversity, formality, and intensity—that emerged from our qualitative analysis of a systematic literature review, and which allowed us to identify 21 features, defined in Supplemental File 1: Appendix A with their respective indicators. Finally, the discussion centres on the definition of cooperation as an ongoing learning process. We conclude with the main relevance of building indicators from a conceptual approach to establish new conventions in citizen science.

Evaluation of citizen science projects

Given the diversity of definitions, it is also difficult to establish indicators that enable citizen science practices to be measured across projects. This challenge has been highlighted in the literature at two levels: in the management of citizen science projects and in policymaking.

Göbel et al. (2019, p.13) explain that there are multiple “modes of governance” demonstrating how citizen science puts activities into practice with multiple groups of people, technologies and standards, which lack “evaluation criteria.” Therefore, this diversity can create a tension between generic and specific criteria. “There are currently no commonly established indicators to evaluate citizen science, and individual projects are challenged to define the most appropriate road towards collecting evidence of their impact” (Kieslinger et al. 2017). Yet, “evaluation” or “impact assessment” criteria for citizen science have been developed (see Parkinson et al. 2022; Passani, Janssen, and Hölscher 2021), which may fall into “implicit normative assumptions […] along an axis of what is good or bad” (Cornwall 2008). Indeed, “in the evaluation of citizen science projects, inputs, activities, and outputs are usually easy to measure with quantitative indicators that show the success, or not, of project management” (Schaefer et al. 2021, p. 496). The emphasis on the success of the project guides the evaluation criteria to be developed, with a view to improving the management process itself. These evaluation criteria can be applied at different stages: before, during, and after the project. For such evaluations, projects mainly rely on the stakeholders’ self-reported data, surveys, and interviews (Schaefer et al. 2021). While new approaches to “co-evaluation” are emerging (Schaefer et al. 2021; Albert et al. 2021), these approaches remain rather vague when it comes to quantifying ongoing practices, that is, practices performed while the project is still being developed, in order to provide feedback to stakeholders. The fifth principle set out by the European Citizen Science Association (ECSA)—“Citizen scientists receive feedback from the project”—is often neglected in practice, and when given, it is addressed through evaluation criteria once the project has been completed (West and Pateman 2016). Indeed, Davis et al. (2022) show that in the evaluation literature within the citizen science field, data provided for analysis are self-declared by the team members and project leaders, mainly in surveys. The indicators are therefore evaluating practices ex post, once they have been performed. Providing feedback after the completion of the project, and not while the project is ongoing, diminishes the opportunities for learning from it. As Bradbury and Reason (2003) suggest, “the quality of participation must be evaluated on an ongoing basis;” as project interests and outsets evolve through time, actors are often confronted with changes.

At the policy level, the scientific and funding opportunities in citizen science projects enable the engagement of new stakeholders in the knowledge production process, both inside and outside scientific laboratories. This type of engagement has a long history. The initial steps of scientific practice occurred under the scrutiny of citizens, although it was only a limited number of “enlightened citizens” who participated, as Science and Technology Studies (STS) (Albert et al. 2021) have demonstrated. The scientific revolution gradually evolved towards a “confined version of science,” restricted to laboratories and publications (Latour and Callon 1991), where a specific style of debate could occur and be assessed, in contrast to the open debates of politics and public opinion, which have increasingly involved new media in recent years.

In contemporary practices of citizen science, engagement is encouraged, as the interaction of different social worlds (civic, academic) promises changes in policy-making (Göbel et al. 2019) and direct impact on society and innovation. However, we are still exploring “how we can monitor the contribution of citizen science to global societal challenges and the EU mission” (COACT 2022). Given the diversity of practices that characterises citizen science projects (Haklay et al. 2021), there is a general call to make citizen science accountable by “establishing a robust and approved scientific method that contributes to further increase in scientific knowledge […]. Actors in citizen science—including policymakers, funding agencies, scientific communities, and practitioners—need to make transparent what they mean when talking about citizen science” (Haklay et al. 2021, p. 26).

New conventions for cooperation in a citizen science platform

The citizen science requirement of “establishing a robust and approved scientific method” (Haklay et al. 2021) can be explained within the framework of the “economy of conventions” (Eymard-Duvernay 1989). A convention (Eymard-Duvernay 1989) is understood as an “investment of form” (Thévenot 1986), which is a long-term activity to induce stakeholders to accept a shared framework of principles, ontologies, decision-making processes, devices, or metrics, that will become acceptable for the new entrants once the convention becomes widely shared. Conventions require a continuous “revision” (Livet 1994) of the stakeholders’ practices and of their understanding so the knowledge produced collectively is recognised as shared and common. When conventions become explicit, for instance in the form of indicators, practices are made transparent and accountable. All standards, procedures, rankings, labels, and legal frameworks of any kind are examples of conventions that took time to be fabricated, discussed, agreed upon, and sometimes encapsulated in devices and metrics to make them more “natural” in everyday transactions. An accountability process in citizen science that would become a convention among all engaged stakeholders and would enhance the robustness of scientific methods is now required.

This article proposes a conceptual approach through a systematic literature review to define the features of cooperation in citizen science projects. The aim is to build a set of indicators from such a definition that help demonstrate the added value and relevance of citizen science practices to the actors themselves, who are participating in knowledge production processes under evolving standards and seeking to influence policymaking according to the communities’ goals. We call “cooperation analytics” the operationalisation of indicators, centred on qualifying and quantifying cooperation to support the empirical development of calculations within a platform and the display of specific feedback. The originality of this article lies in the operationalisation of the cooperation definition, which can be implemented technically in an open-access platform for the benefit of the citizen science community. Cooperation analytics forms part of a bigger research programme, the H2020 project, “Collaborative Engagement on Societal Issues (COESO),” including the development of a digital platform (VERA 2023), where the cooperation indicators can be integrated. Cooperation analytics will provide continuous and direct feedback to 10 pilot citizen science projects, to contribute to the reflexivity of the stakeholders engaged, that is, to reflect on the assumptions that we make when producing knowledge. In that sense, the article’s contribution is twofold, theoretical and practical, as our conceptual approach re-introduces and defines cooperation with specific indicators that are deployable in any citizen science infrastructure, such as VERA. The operationalisation is justified as citizen science platforms are increasingly being developed and offer new opportunities as decision-supporting tools for cooperation (Heigl et al. 2020; Freitag 2016). The framework that we proposed and tested (Pidoux et al. 2022) must be considered as a contribution to the new conventions that are being established by the actors themselves in citizen science projects, for the sake of the quality of their contributions and cooperation. In contrast to existing evaluation methods in the citizen science field, these indicators must be provided to the stakeholders while the project is ongoing, so that the evaluation operates as quasi-real-time feedback on their learning process.

Methodology

Our methodology consisted of a systematic literature review for the conceptualisation of cooperation and its features, in order to build quantifiable indicators from those concepts.

Search strategy and eligibility criteria

We follow the “recursive searches” (Kullenberg and Kasperowski 2016) method in the citizen science literature to conduct a systematic literature review. Recursive searches are a form of qualitative review of research terms in online databases. The searches are refined by progressively adding sources according to inclusion and exclusion criteria. To “recursively include more search terms is a process called ‘snowball searches’” (Kullenberg and Kasperowski 2016). We searched for the following keywords: “cooperation,” “collaboration,” “citizen science,” “participatory science,” “engagement,” and “co-creation.” Terms were searched across electronic databases, as recommended by Braz Sousa et al. (2022). The cross-platform review aimed to identify peer-reviewed scientific literature and other English-language, publicly available, relevant reports that included keywords that were closely related to the concept of cooperation in citizen science practices.

First, we browsed the Citizen Science: Theory and Practice journal, then expanded to relevant SSH journals on the study of cooperation. When seeking more formalised and computable approaches in the SSH literature, the concept of cooperation is apparent in management, in particular in the organisational learning field; in economics, in particular in Game Theory; and more broadly in communication science, education, and sociology. In addition, in the Computer-Supported Cooperative Work (CSCW) community (where interdisciplinary literature mixes SSH and computer science), cooperation criteria are quantified and measured for the design of digital platforms that mediate users’ practices. Other fields explored were Sociology of Work, Sociology of Organisations, Human-Computer Interaction, Ethnomethodology, Sociolinguistics, Creolization Knowledge, Translation Studies, Science and Technology Studies (and Scientometrics). Second, we conducted a final search of keywords in Google Scholar, given the platform’s broad coverage of scientific literature.

To date, there are 258 articles in the Citizen Science: Theory and Practice journal that mention “cooperation” without providing a conceptual definition (see Goldin, Suransky, and Kanyerere 2023). In other journals, there is a generalised lack of quantifiable indicators pertaining to cooperation in citizen science. Consequently, rather than focusing on building a database of articles to review like previous authors (Kariotis et al. 2022), we decided to focus on a selective qualitative review. We paid particular attention to communication science journals and CSCW for their relevance in defining cooperation and their use of quantitative methods that are appropriate for digital platforms. It is important to acknowledge the limitations of reviews as “there is no standard method […], with many studies adopting and omitting different elements of a systematic or scoping review method to meet the needs of their research question and context” (Tricco et al. 2015 in Kariotis et al. 2022). Therefore, we conducted a qualitative literature review by screening articles that fall within the scope of our conceptual approach—an approach that seeks to technically operationalise cooperation and its features within the development of a platform for the COESO project. The screening was conducted systematically in accordance with exclusion and inclusion criteria, following a two-stage process as suggested by Kullenberg and Kasperowski (2016).

Screening was conducted separately by the two co-authors of the article between July and August 2021, with a final update in January 2022. First, we filtered the references by three conditions: i) an indicator of cooperation is evoked; ii) an element of cooperation to be assessed while the project is still under development is identified; and iii) the use of technology is presented. Finally, after observing the low frequency of cooperation recurrence related to citizen science projects and the predominance of qualitative methods to analyse citizen science practices, our literature review was refined into three more inclusive selection criteria: i) terms related to cooperation (i.e., “participation,” “collaboration,” “engagement”) appear in the literature; ii) indicators for those terms are established with or without concrete operationalisation; and iii) the use of technology is evoked or not.

Dataset analysis for computational operationalisation

The researchers reviewed all sources individually. The references were then discussed collectively in September 2021, to agree upon selected features of cooperation and their definitions that could be operationalised in a digital platform. We extracted 36 cooperation features that consider a plurality of cooperation practices and defined them according to the literature review. Finally, we conducted an inductive thematic analysis by agreeing on bottom-up emerging themes that represented commonalities across the features in the literature and that aligned with the scope of our research (i.e., to operationalise indicators). As a result, the emerging themes allowed us to group cooperation features according to four “macro-indicators”: asymmetry, diversity, formality, and intensity.

The operationalisation of concepts into indicators was conducted by the two sociologist researchers after the literature review, between February and April 2022. Then, the concepts were discussed with a technical team (a data scientist, computer engineer, and expert in natural language processing). The process consisted in translating every concept into a quantifiable indicator under two conditions: i) the necessary data can be collected from a digital platform that citizen scientists use daily, and ii) the indicator can be implemented for calculations in a digital platform. These conditions meant that technical requirements, such as server management, memory, and personal data protection, were a constraint. Consequently, an initial list of 36 cooperation features (Boullier and Pidoux 2021) was reduced to 21. Supplemental File 1: Appendix A details the cooperation features retained according to the four macro-indicators. Each feature is defined according to a combination of relevant references in our literature review. Finally, each feature is translated into indicators that enable cooperation to be measured. The indicators later required a feasibility test on empirical data (Pidoux et al. 2022). The feasibility requirement is a key dimension of translating concepts into quantifiable data as it is experienced by those social scientists who want to conduct a robust retrieval of data to test and validate their conceptual assumptions (Alvarez 2016). Even though we tested many proxies of basic criteria for cooperation, many of the important dimensions of cooperation that were extracted from the literature were finally discarded because of the technical non-feasibility of computing the indicators. These decisions rely on a reduction process that all quantification methods (Desrosières 2014) accept and control (provided that they account explicitly for it).
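
To illustrate this translation and reduction process, the following minimal sketch shows how conditions i) and ii) can act as a feasibility filter over candidate features. The feature names and platform event types are hypothetical illustrations of our own, not the actual COESO/VERA data model:

```python
from dataclasses import dataclass

# Event types assumed to be logged by the platform (hypothetical).
AVAILABLE_EVENTS = {"message_posted", "task_created", "task_assigned", "file_uploaded"}

@dataclass
class CooperationFeature:
    name: str
    macro_indicator: str  # "asymmetry", "diversity", "formality", or "intensity"
    required_events: set  # the platform data the indicator's calculation needs

def is_computable(feature: CooperationFeature) -> bool:
    """Condition i): the data can be collected from the platform;
    condition ii): the indicator can then be calculated from it."""
    return feature.required_events <= AVAILABLE_EVENTS

candidates = [
    CooperationFeature("task distribution", "asymmetry", {"task_created", "task_assigned"}),
    # Discarded: the platform does not log physical meetings.
    CooperationFeature("face-to-face meetings", "intensity", {"meeting_attendance"}),
]

retained = [f.name for f in candidates if is_computable(f)]
print(retained)  # ['task distribution']
```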

Findings

The findings are structured into two sections that define four macro-indicators according to the literature review. The cooperation features extracted from the literature are classified under each macro-indicator in Supplemental File 1: Appendix A for ease of reading, and not according to the order in which the references appear in the literature review.

Macro-indicators: asymmetry and diversity

The first macro-indicator is asymmetry. Asymmetry emerges from the contrasted practices in citizen science research: “a group of individuals [can] be deeply involved in the entire process of research while others participate in discrete activities such as data collection or analysis” (Farquhar and Wing 2008 in Shirk et al. 2012). Indeed, participative “initiatives arise in unique contexts, in response to different needs, meaning prescribed approaches are unreasonable” (Wiggins and Crowston 2011). In such contrasted practices, tensions arise. Stakeholders in citizen science seek to combine the goals of scientific projects with the interests of the community and public policy. However, as noted by Groom et al. (2019), the working relationship between researchers, policymakers, and citizen scientists is often not strong in practice, and their approaches to producing the quality of results expected by each party are not always aligned.

Asymmetry also emerges from the fact that in the literature, evaluations of contributions are often unidirectional: academics are the ones that assess the level of citizens’ participation. Researchers mainly investigate the benefits of scientific practice for citizens in order to sustain their volunteer participation (Lopez 2021). Consequently, as qualitative research has shown through different models of citizen science governance (Göbel et al. 2019), the benefits of these practices are not considered to be bidirectional. One author who accounts for “reciprocal forms of engagement” from the citizen perspective is Millerand (2021). The term “engagement,” as defined by Millerand (2021), is relevant for understanding citizen scientists’ cooperation as “a process made up of actions and experiences that contribute to giving meaning and building an identity of an ‘engaged’ or ‘committed’ person over time” (Millerand 2021). Three types of engagement are defined as: i) the scientific citizen: “a citizen in capacity, involved, able to contribute;” ii) the volunteer citizen: “the one who gives, executes following the rules that the researcher will have validated;” and iii) the militant citizen: “who is committed to the service of a cause.” While there are no specific indicators in Millerand’s (2021) work, the contrasting practices used by various types of stakeholders allowed us to extract cooperation features numbers 2, 9, 13, 17, 20, and 21 (Supplemental File 1: Appendix A) to measure asymmetry.
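
As a purely illustrative sketch of how such asymmetry could be quantified from platform data (our own example, not one of the features listed in Supplemental File 1: Appendix A), a concentration measure such as the Gini coefficient over per-member contribution counts distinguishes balanced from one-sided participation:

```python
def gini(counts: list[int]) -> float:
    """Gini coefficient of contribution counts:
    0.0 = perfectly balanced, approaching 1.0 = one stakeholder dominates."""
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical counts of contributions (e.g., messages, annotations) per member.
contributions = {"researcher_a": 120, "citizen_b": 15, "citizen_c": 9}
print(round(gini(list(contributions.values())), 2))  # 0.51 -> strongly asymmetric
```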

The second macro-indicator is diversity. Diversity emerges from the diverse types of participation and profiles of stakeholders identified in the literature on citizen science practices. Millerand (2021) distinguishes engagement from expertise and sets three types of participation between academics and citizens that enhance the diversity of everyone’s contributions: “i) the classical model where responsibility is assumed by a certified scientist; ii) the Wikipedia model based on the volume of contributions coupled with compliance with contribution rules; and iii) the hybrid model involving delegations of responsibility in certain fields of expertise” (Millerand 2021). Shirk et al. (2012) analyse another form of diversity related to practices in ecology projects and define five models of public participation in scientific research: “contractual projects, where communities ask professional researchers to conduct a specific scientific investigation and report on the results; contributory projects, designed by scientists and for which the public primarily provides data; collaborative projects, designed by scientists and for which the public provides data but also help to refine project design, analyse data, and disseminate findings; co-created projects, designed by scientists and the public working together and for which at least some of the public participants are actively involved in most or all aspects of the research process; and collegial contributions, where individuals [without academic affiliation] conduct research independently with varying degrees of expected recognition by institutionalised science or professionals” (Shirk et al. 2012).

For Shirk et al. (2012), the fact that multiple “participatory practices […] are highly relevant when defining a balanced participation structure” guarantees the quality of results. However, in their study, the quality is limited to the comparison between the inputs provided and the outputs achieved by each actor, in addition to the impact that they have on public engagement. In other words, they describe who does what, and for what purposes, in the participatory platform, without specifying indicators.

Stonbely (2017) describes six types of collaborative journalism: “Temporary and Separate,” “Temporary and Co-creating,” “Temporary and Integrated,” “Ongoing and Separate,” “Ongoing and Co-creating,” and “Ongoing and Integrated.” These types provide two relevant general criteria to retain: the temporal dimension and the type of collaboration, which may be writing the project, problematizing, collecting data, processing data, and/or final production through an empirical case (Chibois and Caria 2020). This is complementary to the evaluation and typification of citizen science projects by Haklay et al. (2021), which proposes 10 factors for classifying citizen science and citizen science related activities: “cognitive activeness, compensation, purpose of the activity, purpose of knowledge production, professionalism requirement, training level expected, data sharing, leadership, scientific field, and involvement of the participants.” It is important to highlight that “typologies of participation and project design are best considered tools for understanding trends, as practice inevitably ‘blurs boundaries’” (Cornwall 2008). “The blurring of boundaries [between types] is in itself a product of the engagement of a variety of different actors in participatory processes, each of whom might have a rather different perception of what ‘participation’ means” (Cornwall 2008, p. 274). Moreover, while the literature identifies diverse forms of participation, the different profiles of “participants who take part in community development projects” are often unclear (Cornwall 2008, p. 275). Göbel, Mauermeister, and Henke (2022) extend the literature by describing diverse “‘additional contributors’ under several institutions that are not usually accounted, i.e. ‘invisible partners’ and actors that are not considered official partners in the project, i.e. ‘silent partners.’” Based on this literature, we designed cooperation features numbers 1, 2, 6, 7, 8, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, and 21 to measure the diversity of stakeholders and their contributions (Supplemental File 1: Appendix A).
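
One possible way to turn such typologies into a measurable quantity, sketched here under the assumption that a platform can label each contribution with a type drawn from typologies like those above, is a normalised Shannon diversity index over contribution types:

```python
import math
from collections import Counter

def diversity_index(labels: list[str]) -> float:
    """Normalised Shannon entropy: 0.0 = a single contribution type,
    1.0 = contributions evenly spread across all observed types."""
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0
    n = sum(counts.values())
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))

# Hypothetical contribution types logged for one project.
events = ["data_collection", "data_collection", "analysis", "design", "dissemination"]
print(round(diversity_index(events), 2))  # 0.96 -> highly diverse contributions
```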

Macro-indicators: formality and intensity

The third macro-indicator is formality. Formality emerges mainly from the literature in management and communication science. To our knowledge, only one reference states a vague definition of cooperation in citizen science: “In a very broad sense [cooperation] refers to ‘doing something together’ or ‘working together toward a shared aim’, i.e. joint or coordinated action of all sorts by both individuals and organisations” (Göbel, Mauermeister, and Henke 2022). Their definition contributes mainly to addressing the diversity of activities and contributors that we considered in our previous macro-indicator.

Josserand (2004) approaches “cooperation within communities of practice” where “communication has direct implications for the dynamics of collaboration” (Jamali, Khoury, and Sahyoun 2006). Koster et al. (2007) suggest qualifying cooperation through “task interdependence and the informal network content.” The former refers to the “job descriptions of employees, that is dependent upon the person’s formal position and the technology used.” The latter refers to “personal relationships between members, independent from the position they have and from the tasks to accomplish” (Koster et al. 2007). We extracted the relevance of formalising communicative practices, the relationships that structure dynamics, and the content that circulates through networks in order to build a new set of indicators of formality.

Le Cardinal, Guyonnet, and Pouzoullic (1997, p. 79) define cooperation as a “communication process” that can be grounded in trust, contractual agreements, or a mix of both, offering a framework to distinguish different types of contributions within a project. When participants engage in a project, their cooperation may be driven by trust—which involves implicit practices and relational dynamics that foster problem-solving, task allocation, and management of project complexities—or by contractual agreements, which are explicit and detail tasks, responsibilities, evaluations, deadlines, and remunerations. This distinction leads to the macro-indicator of “formality,” where trust-based cooperation reflects a dynamic and adaptive process, while contractual cooperation defines a more structured and static framework, each influencing how contributions are made and assessed. The main limitation of the approach proposed by Le Cardinal, Guyonnet, and Pouzoullic (1997) is that the attention to the ongoing process of cooperative work is underestimated compared with the pre-setting of conditions at the start of any project.

In the sociology of communication, one potentially quantifiable method is the identification of common references of stakeholders involved in citizen science. Morillon (2021) suggests an epistemological analysis of participation based on documents and communication exchanges (reports, minutes, emails) between researchers and citizens. Morillon (2021) shows the relevance of communication processes according to four epistemological families (positivism, interpretivism, interactionism, constructivism), which are empirically anchored—thereby highlighting the plurality of social worlds that interact in citizen science. The method for identifying and selecting such terms is not presented. The analysis remains qualitative and based on surveys in which stakeholders are requested to define, a posteriori, the common references that were identified by the researcher. However, Morillon’s (2021) communication analysis contributes to highlighting the relevance of analysing references produced by actors within continuous textual practices. In contrast to the classical model of communication (information transmission/reception), when communication analysis is approached as an ongoing learning process, we can grasp collective intelligence according to the common-sense knowledge built in situ by actors (Morillon 2021).

Communication can be framed as “an exchange of knowledge to produce science” (Boullier 1984), where social actors and the information flow are not necessarily aligned. The tension between divergence and convergence is considered as a structural feature of social life as a human capacity (Gagnepain 1994). Even though people claim to aim for cooperation, they face conflicting situations in which some stakeholders will look for distinction, personal recognition, control, or benefits to the detriment of the collective. This behaviour should not be disqualified, but rather described and accepted as one of the dynamic tensions of social life in a real-life environment.

This tension has an impact on three cognitive capacities of human beings. First, their linguistic capacities will be affected by this social tension at the levels of language and distribution of roles for the production of knowledge. The tension can be found between more idiomatic expressions, to the point of using a quasi-jargon, versus the search for a common language, a koinè. Second, the tension will apply to the way human beings use their technical capacities, where one person may look for a specific differentiation while another will look for standardised tools. The division of labour itself is organised with more or less specialisation. It results in a social way of designing technologies. Finally, human beings’ moral capacity—the normative regulation of our behaviours based on our will and desire, and the ability to autoregulate our impulses—is also affected by this social tension. Some will try to let their impulses speak and pretend to be free from all regulations, while others will seek to comply with collective norms. The question of who is in charge of these decisions and the design of these norms is also a social challenge that facilitates the social tension between divergence and convergence. This social dynamic is permanent and contradictory. Hence, a comprehensive cooperation model in citizen science should not be normative or produce a standardised ranking, but should be formalised according to the various ways of creating a liveable space for handling these tensions. From this literature, three cooperation features—numbers 3, 4, and 5—are extracted to measure formality (Supplemental File 1: Appendix A).

The fourth macro-indicator is intensity. Intensity emerges mainly from the definition of quantifiable criteria in the CSCW literature, with additional qualitative criteria considered. Of major relevance for our research scope are the CSCW scholars Liu, Laffey, and Cox (2008), who operationalised cooperation measurements into specific digital platform functionalities. Previous studies (Liu 2008) mainly used survey methods where cooperation was measured by asking users to rate cooperation criteria numerically. For instance, how actors evaluate compatible and mutual goals in their work, how they perceive the cooperation and supervision of their peers, as well as conflicts and suggestions discussed within teams (see Tjosvold and Tsao 1989; Tjosvold et al. 2004; Bacon and Blyton 2006; Sinclair 2003 in Liu 2008). In contrast, Liu (2008) combines qualitative and quantitative methods to collect data and analyse them within a platform in the health sector. However, Liu, Laffey, and Cox (2008) principally account for action logs obtained from the platform: mainly reporting and resolution activity, and content analysis where data were categorised by coders. For designing the intensity macro-indicator, we retain the following elements: individual actions in platforms, such as time spent on different actions in the platform, the total number of actions taken for an event, total characters in event details, and suggestions given to others.
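
A minimal sketch of how these retained elements could be computed from a platform’s action log follows; the log schema, member names, and action types are hypothetical, not those of Liu, Laffey, and Cox’s system:

```python
from datetime import datetime

# Hypothetical action log rows: (member, timestamp, action_type, text content).
log = [
    ("citizen_b", datetime(2022, 3, 1, 9, 0), "comment", "We could refine the sampling grid."),
    ("citizen_b", datetime(2022, 3, 1, 9, 40), "suggestion", "Try weekly uploads instead."),
    ("researcher_a", datetime(2022, 3, 1, 10, 5), "comment", "Agreed, noted."),
]

def intensity_profile(log, member: str) -> dict:
    """Per-member counts mirroring the retained elements: total actions,
    total characters written, and suggestions given to others."""
    rows = [r for r in log if r[0] == member]
    return {
        "actions": len(rows),
        "characters": sum(len(text) for _, _, _, text in rows),
        "suggestions": sum(1 for _, _, kind, _ in rows if kind == "suggestion"),
    }

print(intensity_profile(log, "citizen_b"))
# {'actions': 2, 'characters': 61, 'suggestions': 1}
```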

Khawaji et al. (2013) extend this study by analysing trust and cooperation using natural language processing (NLP) techniques in addition to individual actions. The authors analyse whether the establishment of trust is associated with linguistic cues. The hypothesis is that text can enable researchers to identify the friendly, respectful and, in contrast, competitive attitudes of actors towards others while undertaking tasks together. Here trust is defined in terms of positive attitudes that enable cooperation. We retain the use of NLP techniques for analysing communicative interactions between stakeholders.
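
As a deliberately simplified sketch of such cue detection, the following lexicon-based counter illustrates the general idea; the word lists are illustrative stand-ins of our own, far cruder than the NLP features analysed by Khawaji et al.:

```python
import re

# Illustrative cue lexicons (hypothetical; real studies use richer linguistic features).
FRIENDLY = {"thanks", "great", "agree", "please", "welcome", "happy"}
COMPETITIVE = {"wrong", "mine", "better", "beat", "win", "lose"}

def cue_counts(message: str) -> dict:
    """Count friendly versus competitive lexical cues in one message."""
    tokens = re.findall(r"[a-z']+", message.lower())
    return {
        "friendly": sum(1 for t in tokens if t in FRIENDLY),
        "competitive": sum(1 for t in tokens if t in COMPETITIVE),
    }

print(cue_counts("Thanks, great idea. I agree we should merge the datasets."))
# {'friendly': 3, 'competitive': 0}
```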

The concept of the prisoner’s dilemma (and the game theory of which it is part) has inspired extensive research into cooperation, including Axelrod’s book, originally from 1984, The Evolution of Cooperation (2006). While the evolutionist discussion of cooperation is not primarily relevant to this article, the extensive use and documentation of the prisoner’s dilemma is useful in citizen science activities. Two features can be mentioned (Axelrod 2006). First, the duration and the repetition of turns in the prisoner’s dilemma game are significant features of the experiment. When stakeholders have to choose between cooperation and defection in order to place a bet, they will usually choose defection; while this appears more rewarding at first, it means they will not have an opportunity to learn from each other’s behaviour. But at the point at which an indeterminate number of turns is announced, the learning process can take place and stakeholders engage in a strategy of anticipation where cooperation may become valuable. For citizen science, this means that a one-shot project, with no chance of being extended or reproduced, creates more incentives for defection and fewer for cooperation. Second, the best long-term strategy is the tit-for-tat strategy, which means reproducing the behaviour of the partner at each turn. Adopting a cooperative attitude is much more conducive to sharing the rewards, and such an attitude can be reproduced in the long term until a defection occurs, which should also be reproduced by the other partner. The defection generates a common learning process about the consequences of defection and its cost. However, the best advice is not to defect first, and to wait for a challenge from the other before changing strategy. This can become meaningful for cooperative projects in citizen science in which stakeholders do not know each other and have many different expectations. Choosing to anticipate cooperation seems to generate the best chance of obtaining the cooperation of all parties.
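
Axelrod’s two observations are easy to reproduce in a small simulation. The sketch below uses the conventional payoff values and is a standard textbook illustration, not tied to any citizen science dataset:

```python
# Conventional payoff matrix: (my move, partner's move) -> my score.
# "C" = cooperate, "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_moves):
    """Cooperate first, then reproduce the partner's previous move."""
    return "C" if not partner_moves else partner_moves[-1]

def always_defect(partner_moves):
    return "D"

def play(strategy_a, strategy_b, rounds: int = 10):
    """Iterated prisoner's dilemma: each strategy sees the partner's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(moves_b), strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one defection gain, then joint loss
```

Over ten rounds, mutual tit-for-tat cooperation (30, 30) outscores the defector’s early advantage (14 against 9), mirroring the incentive structure of long-term versus one-shot projects.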

Beyond individual actions, collective actions are accounted for by Góngora Y Moreno and Gutierrez-Garcia (2018), to distinguish cooperation from competition based on agent-model simulations. A key indicator is “cumulative modification of agents’ profiles.” Here, the notion of modification through time is relevant for citizen science. The modification notion is related to the length of projects: “citizen science activities and projects can range from an activity that happens only once (one-off), over a short-term (a few days or weeks), infrequently (once a month or less) and/or long-term (every day and/or over a long period of time)” (Haklay et al. 2021). From this literature, we extracted one additional cooperation feature to measure intensity: number 14, whose richness allowed us to propose multiple indicators (see Supplemental File 1: Appendix A).

Discussion: Definition of Cooperation as a Learning Process

Based on our literature review, we define cooperation as a learning process in communication that can be measured in citizen science practices. We share Liu’s (2008) view that “cooperation encourages the sharing of information, the negotiation of meanings and the construction of common knowledge.” This common knowledge must be assumed as shared by all stakeholders and, as communication theory (Le Cardinal, Guyonnet, and Pouzoullic 1997) shows, cooperation is the result of a learning process and not a starting condition, which was the main limitation identified in the CSCW literature. In that sense, the revision (Livet 1994) of shared understanding is a continuous process, while conflict—in languages, technical styles and norms—is a permanent feature of all collective behaviour (Gagnepain 1994). As Livet (1994) states, this revision process is key for cooperation, which relates more directly to the capacity of actors to learn through action, as they revise their knowledge continuously while interacting with others to follow the same course of action. Consequently, cooperation should be regarded not only as a resource but also as a key aim, distinct from the objectives of competition and conformity.

Cooperation in citizen science can be quantified across dimensions such as asymmetry, diversity, formality, and intensity, which together we call “cooperation analytics,” by identifying and measuring key aspects of its practices and meanings. For example, to measure asymmetry, we suggest using the “Knowledge Exchange Orientation” feature (see Supplemental File 1: Appendix A). This involves analysing how new terms are introduced during communication exchanges and assessing whether the balance between disciplines changes with the introduction of novel concepts. This feature examines the contributions of various members, noting that some may contribute significantly while others might follow or replicate emerging trends. A balanced distribution of contributions can foster enhanced cooperation. Potential indicators for this feature include the “Degree of Knowledge Distribution Balance,” which detects new concepts according to their scientific disciplines, and identifies who introduces or reproduces these terms (refer to Supplemental File 1: Appendix A). Thus, cooperation analytics not only quantify the explicit elements of cooperation but also reduce the tacit aspects of interactions and contributions among citizen scientists, thereby increasing the accountability of citizen science.
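
A toy sketch of such an indicator, assuming for illustration that raw words stand in for the discipline-specific concepts the actual feature relies on, attributes each new term to the member who first used it and reports each member’s share of introductions:

```python
import re
from collections import Counter

# Hypothetical ordered message stream: (author, text).
messages = [
    ("researcher_a", "We should start from a sampling protocol."),
    ("citizen_b", "Could the protocol include oral-history interviews?"),
    ("researcher_a", "Yes, oral-history interviews plus a consent form."),
]

def introduction_shares(messages) -> dict:
    """Attribute each term to the member who first used it and return
    each member's share of all newly introduced terms."""
    seen, introduced = set(), Counter()
    for author, text in messages:
        terms = set(re.findall(r"[a-z'-]+", text.lower()))
        for term in terms - seen:
            introduced[author] += 1
        seen |= terms
    total = sum(introduced.values())
    return {author: count / total for author, count in introduced.items()}

print({a: round(s, 2) for a, s in introduction_shares(messages).items()})
# {'researcher_a': 0.69, 'citizen_b': 0.31} -> researcher-led exchange
```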

Conclusion and Future Research

This article has advanced the conceptual understanding of cooperation as a learning process through a systematic literature review. Moreover, we have systematically extracted 21 cooperation features from the literature, detailed in Supplemental File 1: Appendix A, which we have defined and translated into quantifiable indicators, so that they become operational in digital platforms. The measurement of cooperation according to four macro-indicators—asymmetry, diversity, formality, and intensity—was collectively termed “cooperation analytics.” The aim was for these indicators to facilitate ongoing assessment among citizen scientists, marking a shift towards a more dynamic and accountable form of citizen science where the stakeholders benefit directly from cooperation feedback by assessing their own practices.

The diversity of stakeholders’ practices identified in the literature (Heigl et al. 2020; Haklay et al. 2021; Göbel, Mauermeister, and Henke 2022) suggests that citizen science is still evolving to establish rules and principles for cooperation, particularly within the social sciences and humanities, where conventions are less formalised. We know that establishing conventions is a long-term activity to induce stakeholders to accept a shared framework of principles, ontologies, decision-making processes, etc. (Eymard-Duvernay 1989), and this issue is not new, as STS have shown throughout the history of citizens’ participation in science (Latour and Callon 1991). A convention requires transcending mere best practices; it gains robustness when embedded within legal procedures, institutions, quality controls, standards, devices, and metrics. Therefore, the diversity and evolving practices of citizen science were captured in this article under a set of features and indicators of cooperation that formalise and characterise contemporary forms of citizen science practices.

For citizen science to gain recognition for its scientific validity, it must demonstrate not only the quality of its results but also the reliability of its methods and, importantly, the added value of cooperation with citizens. This is why, as we have demonstrated here, measuring cooperation is crucial to shaping a new convention of science production. Building indicators that have a strong theoretical framework and that relate to the diversity of meanings that stakeholders give to their practices will enable accountability through enhanced traceability. Moreover, these indicators can help stakeholders reflect on their own practices and the assumptions we make when producing knowledge, which is essential for cooperation between citizen scientists who come from different social worlds.

We do not claim that indicators can cover all dimensions of cooperation. Quantification methods require a reduction of conceptual approaches. In future research, this reduction process can be validated mainly from the stakeholders’ feedback. It is the stakeholders who can evaluate which indicators should be included or adapted to enable them to better understand the feedback they receive from the indicators. Then, cooperation analytics could be embedded into digital platforms, like VERA under the COESO project, to enable continuous feedback on learning processes, thereby cultivating a more engaged scientific community. Any such future research would reinforce this article’s contribution to establishing new conventions in citizen science, and to ensuring that the modes of knowledge production conducted by citizen scientists are transparent and robust. Contemporary models of citizen science represent a potential recovery from the deep divisions of labour, space, and knowledge between scientists, their journals, their laboratories, and the public sphere.

Supplementary File

The Supplementary file for this article can be found as follows:

Supplemental File 1

Acknowledgements

We thank the Preste technical team for their active engagement, the journal’s reviewers for their constructive comments, and Esmé O’Keeffe for proofreading. We also thank Alessia Smaniotto for her insights and support for developing the COESO project.

Funding Information

This research was developed under the EU H2020 grant agreement No.101006325.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

Both authors contributed equally to the research. The first author assumed the leading and writing role for this article, while the second author contributed feedback and proofreading.

DOI: https://doi.org/10.5334/cstp.650 | Journal eISSN: 2057-4991
Language: English
Submitted on: Jun 20, 2023
Accepted on: Apr 24, 2024
Published on: Jul 3, 2024
Published by: Ubiquity Press

© 2024 Jessica Pidoux, Dominique Boullier, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.