FEATURE


Professional insight


Janne-Tuomas Seppänen of Peerage of Science sees change on the horizon


If I were asked to describe the current state of affairs in the peer review system, the answer would be easy: the same. Academic publishing does not go anywhere fast.


But change is afoot. When I first began to talk about new ideas for peer review, four short years ago, a common reply was: ‘you are trying to fix a problem that does not exist’. In 2015, an entire session at the ALPSP conference was devoted to the diversity of new approaches to peer review. My conversations are now about solutions rather than debates about the existence of problems.


Going into 2016, there is no longer ‘the’ peer review system to analyse, but many.


Granted, some new developments are just hyped-up promotion of minor tweaks to optional details – for example, ‘transparent peer review’ at Nature Communications, or the ‘Onymous’ option at Peerage of Science. But there are more profound developments too.


One important development is the emergence of mechanisms that challenge, in one way or another, whether a peer review is right, and that put some pressure on the peer reviewer to excel rather than merely perform a chore. At eLife, for example, a non-anonymous cross-reviewer consultation round is a standard part of the process and requires consensus decisions. At Frontiers, authors, reviewers and editors engage in collaborative peer review, again seeking consensus decisions. F1000Research takes things further, replacing the entire concept of acceptance-after-peer-review with what I would call conditional indexing: reviewer verdicts are posted online immediately, with names attached, and ‘approved’ and ‘not approved’ articles differ only in whether they are indexed and searchable.


These new developments have received quite a lot of attention (and the inescapable ire of Jeffrey Beall and Kent Anderson). But it should be noted that the society-run journal Atmospheric Chemistry and Physics implemented very similar measures earlier, with less fanfare but with resounding success – it is the world’s premier publication in one of the most contested fields of science.


Another important development is growing agreement that reviewer recognition is needed. I agree. But on this issue the current state of affairs – tallying the number of contributions – is, in a word, wrong.


Peer reviews are not created equal – some are brilliant pieces of science in their own right, while others are simply garbage. That’s a simple fact. If academic recognition is bestowed regardless, the result will be a worse peer review system.


John Hammersley, co-founder and CEO of Overleaf, wonders whether one individual service will ever cover the review process


Why do we review papers? In fact, why do we review anything? In some cases the reason is obvious, especially when safety is involved: for example, when cabin crew members cross-check that the doors are sealed before every take-off, the review eliminates a single point of failure. For most things, however, the reasons are less obvious, and the review often serves a different purpose. Leaving aside for the moment the question of how well the current review system for academic research achieves any of these things, and accepting that there are far more perspectives than can be covered in a short article, the review process helps readers filter their content, helps authors improve the communication of their research, and helps funding bodies decide where to award research grants.


Research Information FEBRUARY/MARCH 2016


This process includes the researchers who conduct the official peer reviews (whether pre- or post-publication), the supervisors and colleagues who help authors compose their research and set it in the context of other recent and historical work, and those who provide editorial and grammatical feedback, whether from friends and colleagues or from the editors involved in publishing the article. These differing perspectives are one of the reasons why it is hard to find a single, magic-bullet solution to the challenges in scientific communication discussed in other articles.


This isn’t stopping a lot of new ideas and new technologies from taking on the challenge, including:


• Content filtering and recommendation engines, such as Sparrho and Semantic Scholar;


• Open, post-publication peer review platforms, such as F1000Research, ScienceOpen and The Winnower;


• Independent peer review services, such as Peerage of Science, Rubriq and Axios Review (and, relatedly, Publons, which provides a credit mechanism for peer review); and


• Discussion platforms and networks, such as ResearchGate, PubPeer and Academia.edu.


These new technologies are helping with individual elements of the review process, but there is not yet a service that successfully covers them all – and it’s not clear there ever could be. What’s exciting is the increasing interoperability between these new technologies – for instance, how Publons allows authors to gain credit for peer reviews written on different platforms – and how they can be used together to help meet differing needs.



