Feature
about the value of AI and sees more and more publishers turning to such tools. Publons is currently working with the Swiss National Science Foundation on using AI to assess peer review quality, and the organisation’s Publons Reviewer Connect also uses AI to cross-reference its peer review platform with Web of Science to recommend reviewers to editors. As Barros said: ‘We have found that the biggest reason a reviewer rejects the opportunity to review is when the subject area is not relevant, but this really helps with matching a peer reviewer to a manuscript.’

Mugridge concurred, and pointed out how Frontiers’ AIRA will scan manuscripts, identify keywords and then scan databases that use information from the likes of Google Scholar and Scopus to identify potential peer reviewers. Rates of reviewer decline have been low, and a recent survey of Frontiers’ editors indicated that a mighty 87 per cent thought AIRA was useful and enabled staff to make decisions more effectively.

Looking forward, Frontiers now intends to use AIRA to spot research trends. As Mugridge highlighted: ‘Using impact and altmetric data, as well as citations, AIRA can identify what makes a topic or an author trending in a research field... we’re working on this and it’s going to offer interesting insight.

‘We were born digital and open access, and as time has gone on we have seen that machine learning is going to play a huge role in publishing,’ she added. ‘We are using AI to improve our publications so they can stand the test of time.’

Yet not all publishers are adopting AI right now, and UK-based Cambridge University Press, for one, has a different take on the topic. As Fiona Hutton, head of STM open access publishing and executive publisher, put it: ‘It’s not something that we have done... there is this strong idea in the press that editors and researchers in the field are the best people to approach for peer reviewers.

‘But I think it is an avenue to explore going forward – anything that can help to facilitate [the peer review process] in a more intuitive and intelligent way will help the whole scholarly communication process,’ she added.
And for Hutton, this is what it’s all about. As a former cancer researcher, she is only too aware that research isn’t, as she put it, ‘linear’. So with this in mind, she recently launched a new open access journal, Experimental Results.

The journal publishes stand-alone experimental results viewed as inconclusive or negative, as well as attempts to reproduce previously published experiments, including those that dispute past findings. With it, Hutton intends to address the thorny issue of research reproducibility, while also cutting peer review times and easing reviewer fatigue.

‘Some research has an obvious narrative, but research can also be messy and confusing, and some scientists actually have to create that narrative to justify their research,’ she said. ‘I wanted to produce something that is much more mirrored to what the research scientist does, and wanted to show the complexities and challenges that actually happen in research.

‘Also, during peer review, the article is looked at, yet no-one repeats the experiment,’ she added. ‘This journal is providing a quick way for researchers in different laboratories to replicate experiments and publish whether or not they get the same results.’
But not only is the journal very different, so is its peer review process. Given that the journal’s output is small snippets of research, Hutton and colleagues developed so-called scorecards, so that peer reviewers can focus on, say, whether an experiment has been carried out correctly or whether a piece of research answers a valid research question.

The scorecards comprise basic elements to help the reviewer decide if the research is acceptable for publication, followed by weighted options to provide a score. And in line with transparent review, each peer reviewer is identified by name, and each review is published alongside its article with a DOI.
‘When a scorecard is published, the reviewer will get credit, and with the DOI he or she can collect that information as part of their academic record, and so collect the value of their time,’ said Hutton.

The journal launched in September last year, and articles are currently moving through the publication process. Hutton said she and colleagues have ambitions to make the time from author submission to publication ‘very quick’, and feedback to date has been positive.

‘We’ve had really wide engagement, from life sciences to engineering to physics and astronomy, and our community is commenting on the real need to have a publication that can publish research that hasn’t yet seen the light of day,’ she said. ‘This approach is hitting true for a lot of subject areas, and in the beginning we didn’t know if this would actually be the case.’

Hutton is confident that Experimental Results will prevent scientists from needlessly repeating experiments and wasting money, and will save them precious time and effort. And in line with sentiments from Barros of Publons and Frontiers’ Mugridge, she also believes the publication will save peer reviewers precious time and deliver much-needed recognition.

‘Peer review is a huge part of researchers’ academic time, and they are contributing to ensuring that their part of a research field is adequately peer reviewed. They really need to be credited for doing this as part of their normal career progression.

‘We’re finding that reviewers like [our system] as they are receiving credit for what they are putting in... and making sure researchers receive this is becoming very, very important,’ she concluded.