Analysis and news


Paying it forward – publishing your research reproducibly

It might be fair to say we have entered the era of irreproducible science, write Martijn Roelandse and Anita Bandrowski


In a recent study, it was estimated that 50 per cent of the US preclinical research spend was not reproducible. That is a total of 28 billion USD or the equivalent of 600,000 annual postdoc salaries! On top of that, the success rates for new development projects in Phase II trials have fallen from 28 per cent to 18 per cent in recent years. Francis Collins, director of the National


Institutes of Health, stated: ‘A growing chorus of concern, from scientists and laypeople, contends that the complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.’ The majority of issues around


irreproducibility stem from flaws in reference material: the unreliable identification of source materials used in preclinical studies, particularly contaminated, mishandled or mislabeled biological reagents such as antibodies or cell lines.
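The headline figures above are easy to sanity-check. A quick sketch (the $28 billion spend and the 600,000-salary equivalence are quoted in the article; the implied per-salary figure is our arithmetic, not a quoted number):

```python
# Sanity-check the article's headline figures.
irreproducible_spend_usd = 28e9       # $28 billion, as quoted in the article
postdoc_salary_equivalents = 600_000  # annual postdoc salaries, as quoted

# Implied annual salary per postdoc (derived, not quoted)
implied_salary = irreproducible_spend_usd / postdoc_salary_equivalents
print(f"Implied annual postdoc salary: ${implied_salary:,.0f}")  # ≈ $46,667
```

The two quoted figures are mutually consistent at roughly $47k per postdoc per year, a plausible salary, which is presumably how the equivalence was derived.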


One of these flaws, unreliable identification of materials, was noted recently in an editorial expression of concern issued for an article in Science. The crux of the problem seemed to be that one of the authors picked a virus strain that caught the other authors by surprise, an error that was not caught before publication. Three studies and a clinical trial attempted to replicate the findings without success, dealing the final blow to a promising HIV cure.


Another flaw causing hundreds of


scientists to create an organisation devoted to cell line authenticity is the use of problematic cell lines. For example, more than 300 studies had used a breast adenocarcinoma cell line before it was found to be derived from human ovarian carcinoma cells. Some $100 million of research funding may have been spent on this misidentified cell line alone. We will not recount here


how the more than 1,000 cell lines with reported problems continue to contaminate the cancer literature (Freedman et al., 2015). Instead, we would like to draw your attention to a simple act that may substantially reduce the use of problematic cell lines.

24 Research Information August/September 2020

A recent study by Babic et al. (2019) showed that in papers that identify cell lines through RRIDs (research resource identifiers), the use of problematic cell lines was substantially lower than in those that did not. RRIDs were introduced in 2014 and have since caught the attention of many publishers, such as Cell Press, because they are a fairly simple method for disseminating important information about reagent quality before a paper is published, avoiding the need to issue editorial expressions of concern or even retractions. In fact, later that year a group of editors
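To make the mechanism concrete: an RRID is cited inline in a methods section (e.g. a cell line cited as "RRID:CVCL_0030"), which is what lets screening tools match reagents against curated registries. A minimal sketch of how such a tool might pull RRIDs out of manuscript text — the regex and the example strings are our illustration, and real tools such as SciScore are far more sophisticated:

```python
import re

# RRIDs share a common "RRID:<prefix>_<accession>" shape, e.g. antibodies
# (AB_...), cell lines (CVCL_...), software (SCR_...). This pattern is a
# rough sketch, not the official grammar.
RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+[_:][A-Za-z0-9_-]+)")

def extract_rrids(text: str) -> list[str]:
    """Return all RRID accessions cited in a block of manuscript text."""
    return RRID_PATTERN.findall(text)

# Hypothetical methods-section fragment with two illustrative RRIDs.
methods = ("HeLa cells (RRID:CVCL_0030) were stained with an anti-GFAP "
           "antibody (RRID:AB_10013382) before imaging.")
print(extract_rrids(methods))  # ['CVCL_0030', 'AB_10013382']
```

Once extracted, each identifier can be looked up in its registry to check whether the resource has been flagged as problematic — which is the "simple act" that the Babic et al. result suggests pays off.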


representing more than 30 major journals, representatives from funding agencies, and scientific leaders drew up a list of Principles and Guidelines for Reporting Preclinical Research. These identified four key areas:
• Scientific rigour (or rigorous experimental design);


• Scientific premise (or strength of the key data supporting the proposed research);
• Identification of key resources; and
• Sex and other biological variables.


“Scientists have certainly become aware of the problems with reproducible research”


Since then several projects have been initiated, with varying success. Scientists have certainly become


aware of the problems with reproducible research, as evidenced by a survey conducted by Nature. Some journals implemented checklists that address many of the principles for reporting preclinical research, and the effect at the top journal, Nature, has been positive, with authors making explicit aspects of their methods, such as whether or not they blinded any aspect of their study to reduce investigator bias. Indeed, last year,


a group of publishers even took the important step of creating a multi-publisher checklist, so that wherever authors decide to publish, the standard would be the same (see the MDAR project). In the meantime, more and more


manuscripts are being submitted each year, and the pressure on journals and reviewers to assess the quality of the work is increasing. So what can be done to maintain or improve the quality of peer review? One answer is to pay for peer review.


There certainly are scientists who would like to make a little extra money on the side, and well-resourced journals such as eLife routinely use professional review as part of their process. This prevents poor-quality and inconsistent manuscripts from being sent to traditional peer reviewers, streamlining the process. Another answer is to do away with


peer review altogether. Indeed, preprint servers are increasingly being used as sources of information, and these manuscripts are now frequently cited in both pre-peer-review and peer-reviewed work. Perhaps a more interesting solution, at


least from the technology perspective, is to offload part of the peer review process onto machines. A recent survey on Twitter by Helen King turned up a plethora of tools that in some form support the publication process. Nearly 50 per cent of those play a role in the submission process, performing technical checks, editorial support, metadata extraction or language polishing. A portion of the technical checks concerns reproducibility, where Barzooka, OddPub, JetFighter, limitations-finder, seek&blastn, Ripeta and SciScore may lend support.
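As a rough illustration of what one of these automated technical checks does: a keyword screen for open-data and open-code statements, in the spirit of a tool like OddPub. This is a toy sketch, not OddPub's actual implementation — the phrase list and example sentences are ours:

```python
# Toy screen for open data / open code statements. Real tools use curated,
# validated phrase sets and far more robust text matching than this.
OPEN_SIGNALS = [
    "data are available",
    "data availability",
    "code is available",
    "available on github",
    "deposited in",
]

def has_open_statement(manuscript_text: str) -> bool:
    """Flag text that appears to contain an open data/code statement."""
    lowered = manuscript_text.lower()
    return any(phrase in lowered for phrase in OPEN_SIGNALS)

print(has_open_statement("All data are available at Zenodo."))        # True
print(has_open_statement("Data available upon reasonable request."))  # False
```

Even a crude check like this illustrates the division of labour the article describes: machines handle the mechanical screening at submission, so that human reviewers can spend their time on the science.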


So what are these tools and what can they tell us about the manuscript? JetFighter checks for colour-blindness compliance of images; Barzooka finds bar graphs and attempts to figure out if the authors are using these for continuous variables (a bad way to represent continuous data); OddPub checks for statements about open code and open data; Limitations-finder pulls out authors



