LABORATORY INFORMATICS GUIDE 2014 | INTEGRATED LABORATORIES
to facilitate data processing, data exchange, and verification. One of the ultimate goals is to eliminate widespread inefficiencies in laboratory data management, archival, transmittal, and retrieval, and to support a start-to-finish product quality lifecycle, which would enable cross-functional collaboration between research, development, quality assurance, and manufacturing.
Example of dashboard ➤
● For the scientific researcher, a flexible user interface is essential: the ability to record data, make observations, describe procedures, include images, drawings, and diagrams, and collaborate with others to discover new chemical compounds or biological structures, without limitation.
● For the QA/QC analyst or operator, the requirements for an integrated laboratory are quite different: a simple, natural-language-based platform that ensures proper procedures are followed will be welcomed.
● To investigate a client's complaint professionally, the customer-care employee requires a quick and complete dashboard report to view metrics for all cases, assignments, and progress in real time, by task, severity, event cause, and root cause. The devil is in the detail, and that is where the laboratory data may give significant insights.
● Legal: instead of saying 'we saw that a couple of years ago, but we don't remember much about it', sensitive information can be searched and retrieved, including archives.
● During regulatory inspections: 'show me all the data for this time frame, which raw-material batches were involved, and show me all the details'.
HETEROGENEOUS SCIENTIFIC CHALLENGES The lack of data standards is a serious concern in the scientific community. It may seem a boring topic these days, but the need for standardisation in our industry has never been greater. Without such standards, automating data capture from instruments
or data systems can be challenging and expensive. Initiatives such as the Allotrope Foundation² are working hard to establish these badly needed common standards. The Allotrope Foundation is an international not-for-profit association of biotech and pharmaceutical companies building a common laboratory information framework: an interoperable means of generating, storing, retrieving, transmitting, analysing, and archiving laboratory data, as well as higher-level business objects such as study reports and regulatory submission files. The deliverables from the foundation, sponsored by industry leaders such as Pfizer, Abbott, Amgen, Baxter, BI, BMS, Merck, GSK, Genentech, Roche, and others, are an extensible framework that defines a common standard for data representation.

➤ 'The framework will include metadata dictionaries, data standards, and class libraries for managing analytical data throughout its lifespan'
Table 2: SQL pros and cons

Why the traditional hierarchical model was initially abandoned:
● Complex architecture
● Slow responses
● Vendor bound
● Inflexible, fixed data schemas
● Required a mindset change
● Invasive technology

Why SQL became a success story:
● Extensible, open architecture
● Separation of physical data and metadata
● Product independence
● User-definable, flexible ad-hoc query capabilities
● Availability of faster computers and networks
● A single database language
The framework will include metadata dictionaries, data standards, and class libraries for managing analytical data throughout its lifespan. Existing or emerging standards will be evaluated and used as appropriate, to avoid 'reinventing the wheel'. It is a wake-up call for the industry, but one that may be muted by our risk-averse, sceptical mindset. It is reminiscent of how database technologies emerged in the 1970s; the reader is challenged to identify the similarities between the development of SQL and the initiative to create an intelligent and automated analytical laboratory.
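To make the idea of a metadata dictionary concrete, consider how a standardised vocabulary would let any vendor's software describe an analytical result in the same way. The sketch below is purely hypothetical: the field names and controlled terms are invented for illustration and are not the actual Allotrope schema, which was still being defined at the time of writing.

```python
# Hypothetical metadata-dictionary entry for one analytical result.
# Keys and vocabulary terms are illustrative only, NOT a real standard.
result_record = {
    "technique": "HPLC-UV",      # term from a controlled vocabulary
    "sample_id": "B001-07",
    "analyte": "caffeine",
    "value": 12.7,
    "unit": "mg/L",              # standardised unit string
    "acquired": "2014-03-02T09:41:00Z",  # ISO 8601 timestamp
}

# Because every system would use the same keys and vocabularies, a
# generic consumer could validate or search results without any
# vendor-specific code:
required = {"technique", "sample_id", "analyte", "value", "unit"}
missing = required - result_record.keys()
print(sorted(missing))  # → [] : the record carries all required fields
```

The point is not the particular fields but the contract: once the dictionary is shared, validation and cross-system search become generic operations rather than per-vendor integration projects.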
GLASS HALF FULL OR HALF EMPTY? The deployment of computerised database systems started in the 1960s, when the use of corporate computers became mainstream. Two database models were popular in that decade: a network model, CODASYL, and a hierarchical model, IMS. In 1970, Ted Codd (IBM) published a seminal paper proposing a relational database model. His ideas changed the way people thought about databases: in his model the database's schema, or logical organisation, is disconnected from the physical information storage, and this became the standard principle for database systems. Several query languages were developed; however, the Structured Query Language, or SQL, became the standard query language in the 1980s and was embraced by the
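Codd's separation of logical schema from physical storage is exactly what makes the ad-hoc querying in Table 2 possible: a declarative query states what is wanted, not how to navigate storage, as the hierarchical and network models required. A minimal sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database: the logical schema below says nothing about how
# the engine physically stores the rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sample (batch TEXT, analyte TEXT, result REAL)")
con.executemany(
    "INSERT INTO sample VALUES (?, ?, ?)",
    [("B001", "assay", 99.2), ("B001", "water", 0.4), ("B002", "assay", 98.7)],
)

# An ad-hoc, declarative query: we state *what* we want (mean assay
# result per batch), not *how* to traverse pointers to find it.
rows = con.execute(
    "SELECT batch, AVG(result) FROM sample"
    " WHERE analyte = 'assay' GROUP BY batch"
).fetchall()
print(rows)  # → [('B001', 99.2), ('B002', 98.7)]
```

The same query keeps working if the physical storage changes, if indexes are added, or if the table grows to millions of rows; that resilience is the property the laboratory-data standards initiatives are trying to reproduce for analytical data.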
➤