Sponsor’s viewpoint
Open research: what’s missing?
Open science – or open research, as we prefer at Digital Science – is now a familiar concept, and one that many have strived towards for years, writes Isabel Thompson
Open research is a compelling proposition, yet we don’t have it. Why? The challenges include implementing infrastructure, aligning business models, and evolving support mechanisms. But a more fundamental barrier is the need to change research culture to align with an open approach. To achieve this, we can look to open evaluation to support the broader open movement.

In the quest for open research, we can distinguish between concrete components and systemic issues. Advocates of open research have predominantly focused on developing open components within the broader system. Open access and open data have been at the forefront, as the most visible, formalised and consumable aspects of the research system. They are accompanied by open standards, open protocols and open methods, which drive science forward through improved transparency and reproducibility.

The remaining challenges for open research are harder. Beyond individual components lie the messy, system-wide challenges. It’s not surprising that the culture challenge remains: culture change is hard. It requires retraining and retooling, from the boots-on-the-ground PhD workforce through to funders and government.

In the pursuit of an ‘open culture’, a key element that deserves more attention is open evaluation. It supports an open culture through increased trust and transparent interaction. It also matters for research more broadly: to trust research outcomes, stakeholders of the research system must trust the underlying process – and they can only do so if they trust how research is evaluated. Many processes, from journal acceptance to career promotion, are insufficiently transparent. Open evaluation would benefit both producers and consumers of research by opening up the fundamental mechanisms of research progress.

We define open evaluation, intentionally naively, as follows: a fully transparent and reproducible process in which the measures are well defined and shared in advance of any evaluation.
Setting up such an environment, where all participants understand the ‘rules of the game’, should result in a fairer system in which everyone is incentivised in an open manner. However, there are multiple challenges.

Firstly, evaluation isn’t a monolithic activity. As humans, we evaluate all the time, mostly without thinking; our biases and preferences are not clear, even to ourselves. Open evaluation, by contrast, requires up-front specification. In practice, it must be inherently limited in scope, since we can’t hope to formalise every evaluative situation: national assessments, grant committees, article peer review, promotion and tenure – plus everyday instances like reading a paper or attending a colloquium. And since research is the discovery of the unknown, it’s difficult to create an ‘evaluation score sheet’ ahead of an assessment: an over-reaching specification can constrain and suffocate the very innovation we are trying to foster.

Secondly, while evaluation plays a critical role in research, it’s not carried out like oversight functions in other industries. In a free-market economy, we are used to regulators holding oversight roles in areas requiring specialist and focused understanding, such as drug approval. In contrast, the research domain often acts as its own judge and jury. A research regulator would need to oversee the broadest remit imaginable – the creation of knowledge in all its forms – which makes building meaningful evaluation frameworks challenging.

Thirdly, evaluation and research are not parallel, separate activities. They are subject to Goodhart’s Law – a concept that appears, under various names, in fields from quantum theory to psychology. Goodhart’s Law states that, if people are aware you are measuring their actions for a particular outcome, they will modify their actions to optimise the results in their favour; thus, measurement of the original target is contaminated to an unknown degree. In the evaluation of research, it is especially important that we recognise this real-world effect.

What’s the solution? Perhaps we can attempt to institute open frameworks for specific evaluation tasks, like promotion and tenure. However, for general guidelines to be of any use, they must be sufficiently specific to be actionable, but not so specific that they overly constrain. A necessarily high-level set of principles may not substantively add to existing approaches such as the Declaration on Research Assessment (DORA) and the Leiden Manifesto.

We must be practical, and embrace Goodhart’s Law: if measuring can change the outcome, we can drive change by constructing the right evaluation. This is a tantalising opportunity, though it requires extremely careful handling. The hope is that we can use research evaluation as a powerful tool for open behaviour – as many funders and governments already do. The UK’s Research Excellence Framework (REF), for example, requires that outputs are made available through open access to be eligible for consideration. Meaningful macro effects can be achieved through careful, conscious decisions to explore open evaluation practices that are practical and limited in scope. Looking forward, reproducibility could be targeted not only at research itself, but at larger evaluative events – peer review, promotion and tenure, and national assessment. Ensuring that the resulting data are collected, stored, licensed and shared with the broader community could lead to positive cultural change.

Where does this leave us? After making progress with many components of openness, it’s time to address the systemic issues, such as cultural change. To support this, open evaluation deserves more focus: it’s an important part of the open ecosystem, as it enables trust in the research process. Moreover, we recognise that evaluation affects behaviour. Therefore, on our journey towards open evaluation, we should build in explicit support for other facets of openness, to assist the open movement more broadly. More open behaviours should feed into a richer open research culture, creating a virtuous cycle that results in our ultimate goal: open research.
Isabel Thompson is head of data platform at Digital Science