LABORATORY INFORMATICS SPONSORED CONTENT
Lack of FAIR data reduces life science innovation
Siloed research data can limit life science organisations’ impact, which negatively affects scientific innovation, writes Robert Roe
As the volume of data and the number of use cases in life sciences continue to grow, there is mounting concern that a lack of reusability will impede innovation. To solve this issue, companies are exploring FAIR (Findable, Accessible, Interoperable and Reusable) data principles to help them make better use of the data stored across an organisation.

In May 2018, the EU published a report estimating that the absence of FAIR research data costs the European economy at least €10.2bn every year. The report also draws a rough parallel with the European open data economy, concluding that the downstream inefficiencies arising from not implementing FAIR data could account for a further €16bn in losses annually. Similarly, in the US, recent Gartner research finds that poor data quality has a significant average financial impact on organisations. The cumulative effect of unproductive time spent by every scientist, across multiple departments or sites, can negatively impact business outcomes. In the pharmaceutical industry, this could mean losing competitive advantage through extended time-to-market; in contract research organisations (CROs), motion waste can gradually erode business potential, with slower project turnarounds limiting the number of clients served.

This time-sink caused by everyday motion waste directly impacts data findability. In addition, the lack of a reliable system makes it difficult to access records for annual stock reviews or audits. When a regulatory authority or a laboratory manager intervenes in the data, additional time is spent re-capturing old records into preferred formats. Frequent episodes of data cleansing can result in substantial motion waste, as scientists pause research projects to instead get inventory records in order.

30 Scientific Computing World Spring 2021

As new and emerging technologies come into play, and secondary use cases for existing research data continue to grow, it is becoming increasingly important for data to be accessible, rather than siloed within specific departments or disparate data systems. AI and machine learning are a case in point: as these approaches become more popular and more widely adopted, they require researchers to take large sets of historic data and apply them to solve more problems and ask new questions – leading to new data uses and requiring fresh data management techniques that can support these large data sets.

“The cumulative effects of unproductive time spent by every scientist across multiple departments or sites can negatively impact business outcomes”

This means that organisations have the potential to benefit greatly from a shift in the way they manage data and data sharing. Rather than an individual scientist’s data being used only by a specific person or team for one purpose, today data can be used by the entire company, and even the wider industry, to advance innovation. However, the industry is still playing catch-up to make this data sharing a reality.

Covid-19 has highlighted the urgency of addressing these problems and has provided a wake-up call for many organisations. To ensure that organisations can respond quickly to such events in the future, scientists need to be able to access the right data, in a functional form, as quickly as possible. This data may need to be shared with other life science and biotech companies, and potentially integrated with large-scale real-world evidence (for example, data from self-reporting mobile apps such as ZOE in the UK). FAIR data is essential to bring global solutions to such a huge public health crisis, and to others that may follow.
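What "findable" means in practice can be sketched in a few lines. The example below is purely illustrative: the catalogue, the dataset identifier and the field names are invented for this sketch (loosely following common Dublin Core/DCAT conventions), not a real standard or product. The point is that each FAIR principle maps to a concrete metadata field, and that scientists search the metadata rather than the raw files.

```python
# A minimal sketch of FAIR-style dataset metadata in a hypothetical
# in-house catalogue. All identifiers and URLs are invented examples.
catalogue = [
    {
        "identifier": "doi:10.0000/example-assay-001",       # Findable: unique, persistent ID
        "title": "HTS assay results, kinase panel",
        "keywords": ["kinase", "HTS", "IC50"],               # Findable: rich, searchable metadata
        "access_url": "https://data.example.org/assay-001",  # Accessible: retrievable via its ID
        "format": "text/csv",                                # Interoperable: open, machine-readable
        "licence": "CC-BY-4.0",                              # Reusable: clear reuse terms
        "provenance": "Lab 3, Nov 2020; protocol P-17",      # Reusable: detailed provenance
    },
]

def find(keyword):
    """Findability in practice: query the catalogue's metadata, not the raw files."""
    return [d["identifier"] for d in catalogue if keyword in d["keywords"]]
```

A colleague in another department, or another company entirely, can now locate the dataset with `find("kinase")` without knowing where the files live.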
AI requires new ways of managing data
In recent years, the life sciences industry has suffered an unignorable decline in innovation efficiency, but AI has the power to change this. Drug developers are looking to bring together everything that is known about a problem, to build a more accurate and nuanced picture of patients, diseases and medicines. As such, we need new ways to capture and manage these varied data. FAIRification is one such way. Employing data to build more realistic,
multi-dimensional analyses will help researchers better understand diseases and assess how chemical entities behave in biological systems. Data that are structured in line with FAIR principles, and so are interoperable and reusable, will make this approach possible. It’s also an approach that promises to slash drug development times and vastly reduce late-stage failures. To build such in-depth patient and
product profiles, life science companies need access to greater volumes of data external to their organisation, including: public domain sources (such as PubMed,
ClinicalTrials.gov, FDA); commercial intelligence (such as Sitetrove, Pharmaprojects, Pharmapremia); data provided by CROs; and real-world evidence (such as electronic health records (EHR) and patient self-reporting).

Since 2016, the FAIR Data Principles have been adopted by the European Union (EU), together with a growing number of pharmaceutical companies, research organisations and universities. To accelerate innovation and productivity, more organisations and public bodies will need to follow in their footsteps.
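Interoperability across sources such as those listed above ultimately comes down to mapping source-specific fields onto a shared schema. The sketch below is illustrative only: the records and field names are invented for the example (they are not real PubMed or CRO export formats), but the pattern — an explicit field mapping per source, applied once at ingestion — is what lets records from different providers be queried together.

```python
def harmonise(record, mapping):
    """Map a source-specific record onto a shared schema via an explicit field mapping."""
    return {target: record.get(source) for target, source in mapping.items()}

# Invented examples of two differently-shaped source records.
literature_record = {"pmid": "12345", "article_title": "A kinase inhibitor study"}
cro_record = {"study_id": "CRO-77", "title": "Tox screen, compound X"}

# One mapping per source; the output schema is the same for both.
common = [
    harmonise(literature_record, {"id": "pmid", "title": "article_title"}),
    harmonise(cro_record, {"id": "study_id", "title": "title"}),
]
```

After harmonisation, both records share the same `id`/`title` schema, so a single query can span literature, CRO deliverables and real-world evidence alike.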
@scwmagazine | www.scientific-computing.com