

or systems, but often includes many more. It is essential to know who all of these parties are and understand their goals and individual wins.


● What is each party’s contribution? Each party in the integration has a role to play, ranging from simply being aware of data changes for reporting to participating actively in the system. Identify what each party expects from the process, define project ownership and clearly articulate an escalation plan in the event of difficulties.


● Avoid customisation or programming at the interface level between systems. Most data needs to be changed or reformatted before, or just after, transfer, so spend significant time debating how this will be handled with vendors at the RFI or implementation stage.


● Define internal and external communication strategies to ensure that expectations are set. When data integration projects fail, it is often due to a lack of clear understanding of the final goal. Don’t make assumptions – ask for details throughout the process.


STANDARDS – FIRST THINGS FIRST

Data integration in laboratories is not straightforward. It may seem a boring topic these days, but the need for standardisation in our industry has never been higher. Without common standards, automating data capture from instruments or data systems can be challenging. Several initiatives are working hard to address these badly-needed standards. The Pistoia Alliance aims to lower barriers to innovation by improving the interoperability of R&D business processes through precompetitive collaboration. The alliance was conceived by informatics experts at AstraZeneca, GSK, Novartis and Pfizer who were all attending a meeting in Pistoia, Italy.


Pistoia’s founders realised that their organisations were all tackling the same precompetitive problems – issues around aggregating, accessing and sharing data that are essential to innovation, but provide little competitive advantage. They realised that working together to solve these common problems would free their organisations to innovate, by enabling them to cut costs and repurpose precious resources for projects with more strategic, competitive impact.

In June 2012, an industry-sponsored initiative to promote open information standards for the analytical laboratory was formed. The Allotrope Foundation, sponsored by Abbott, Amgen, Baxter, BI, BMS, Merck, GSK and others, is addressing the lack of common metadata repository formats. The proposed framework will consist of (a) open document standards based upon XML and JSON, (b) open metadata repositories to provide accurate input from numerous data sources, and (c) open source class libraries to support these components.
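To make this concrete, here is one way such a document might look – a minimal sketch in Python emitting JSON. The schema name and every field below are illustrative assumptions, not the Allotrope specification:

```python
import json

# Illustrative sketch only: the schema name and fields are assumptions,
# not the published Allotrope format. The point is a self-describing,
# text-based result document that can be read without the source app.
result_document = {
    "schema": "example.org/analytical-result/v1",   # open document standard
    "metadata": {                                    # feeds a metadata repository
        "instrument": "HPLC-07",
        "technique": "liquid chromatography",
        "analyst": "jdoe",
        "timestamp": "2013-01-15T09:30:00Z",
    },
    "results": [
        {"analyte": "caffeine", "value": 4.92, "unit": "mg/L"},
    ],
}

print(json.dumps(result_document, indent=2))
```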


A standard is only a standard if organisations adopt it. One that is gaining acceptance is AnIML, a standardised data format for storing and sharing experimental data. It is suitable for a wide range of analytical measurement techniques: AnIML documents can capture laboratory workflows and results, no matter the instruments or techniques used, and the standard supports full audit trail capability, digital signatures and validation for regulatory compliance. AnIML is based on XML, which has two consequences: first, many tools for XML manipulation are readily available off the shelf, making implementation easier; second, as XML is a text-based format, AnIML documents are human-readable – an important aspect for long-term storage. AnIML is being developed by the ASTM E13.15 subcommittee on analytical data, comprising volunteers from the industrial, academic, government and vendor communities.
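Because AnIML is plain XML, off-the-shelf tooling is enough to produce or read it. The sketch below builds a simplified AnIML-style document with Python’s standard library; the element names are modelled loosely on the draft core schema and should not be read as the normative structure:

```python
import xml.etree.ElementTree as ET

# Simplified, non-normative sketch of an AnIML-style document.
root = ET.Element("AnIML")
samples = ET.SubElement(root, "SampleSet")
ET.SubElement(samples, "Sample", name="Batch 42", sampleID="S-042")

steps = ET.SubElement(root, "ExperimentStepSet")
step = ET.SubElement(steps, "ExperimentStep", name="UV-Vis scan")
series = ET.SubElement(ET.SubElement(step, "Result"),
                       "Series", name="Absorbance", seriesID="abs")
values = ET.SubElement(series, "IndividualValueSet")
for reading in (0.12, 0.34, 0.29):
    ET.SubElement(values, "F").text = str(reading)  # one element per value

# Text-based output: human-readable, which matters for long-term storage.
print(ET.tostring(root, encoding="unicode"))
```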


Other standards with a significant impact on how data integration can be successfully implemented include ISA, ASTM and IEEE standards – multi-industry, globally accepted standards. For example, the ISA standards provide models and terminology for structuring the production process and developing the control of equipment (ISA-88), and for production, maintenance and quality (ISA-95). Centocor Ortho Biotech used S88 principles to develop a system-independent, recipe-based ELN/LES system based upon Accelrys’ ELN technologies. Using S88 resulted in structured processes and provided zero-day release and zero-day transfer to and from external contract labs and between internal groups, including the laboratory, pilot-plant scale-up and production facilities.
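The S88 procedural model – procedure, unit procedure, operation, phase – is what makes such recipes system-independent. A minimal sketch of that hierarchy, with invented names and parameters, might look like this:

```python
from dataclasses import dataclass, field

# Minimal sketch of the S88 procedural hierarchy
# (procedure > unit procedure > operation > phase).
# All names and parameters are invented for illustration.
@dataclass
class Phase:
    name: str
    parameters: dict = field(default_factory=dict)

@dataclass
class Operation:
    name: str
    phases: list = field(default_factory=list)

@dataclass
class UnitProcedure:
    name: str
    operations: list = field(default_factory=list)

@dataclass
class Procedure:
    name: str
    unit_procedures: list = field(default_factory=list)

recipe = Procedure("Wet granulation", [
    UnitProcedure("Granulator", [
        Operation("Binder addition", [
            Phase("Dose binder", {"rate_ml_min": 50}),
        ]),
    ]),
])
```

Because the recipe is expressed against this generic hierarchy rather than any one system’s vocabulary, the same structure can be exchanged between internal groups and external contract labs.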


CONTEXT IS KING

Overall, there are three basic operating principles for optimising data integration.


‘Change is on the horizon, however, as the adoption of new mainstream technologies like cloud, service-oriented services, and mobile devices becomes more prevalent’


First of all, capture the data at the point of origin to eliminate human error and reduce system complexity. SmartLab from VelQuest, LIMSLink from Labtronics, and LabWare all began with instrument data integration in mind: their original products were designed to capture laboratory data at the data source. Secondly, simplify and implement self-documenting processes to eliminate transcription errors and avoid unnecessary retyping of data. In a recent survey, 32 per cent of respondents said that data integration in a paperless laboratory would eliminate manual entries and data transfer.
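By way of illustration, the sketch below captures results by parsing a hypothetical instrument CSV export directly, rather than having an analyst retype values; the file layout and column names are assumptions made for the example:

```python
import csv
from pathlib import Path

# Hypothetical instrument export: the layout and column names are
# assumptions. Reading the file directly removes the manual
# transcription step – and with it, transcription errors.
def capture_readings(export_file: Path) -> list:
    readings = []
    with export_file.open(newline="") as handle:
        for row in csv.DictReader(handle):
            readings.append({
                "sample_id": row["sample_id"],
                "value": float(row["value"]),  # fail fast on malformed data
                "unit": row["unit"],
            })
    return readings
```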


Finally, ensure that metadata is captured in a structured way. Raw data represents a set of unorganised and unprocessed facts (e.g. a collection of numbers) and is usually static in nature. A data file without context or metadata information is meaningless. The scientist is no longer working in isolation in the laboratory, but is integrated into the overall quality process. To ensure tacit knowledge is maintained in computerised systems, context information must form the foundation for integrated scientific analysis and interpretation. Context is the organisation of related elements that makes analysis and interpretation possible (a sketch of such a record follows the list):

● Data type context – enables specific types of data analyses.


● Batch context – enables batch-to-batch comparisons.


● Process context – enables process-to-process comparisons.


● Site context – enables site-to-site comparisons.


● Genealogy context – enables upstream/downstream correlations.
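One way to picture these layers is a single result stored with each context attached as structured metadata. The record below is a hypothetical sketch; the field names are invented for illustration rather than taken from any product:

```python
# Hypothetical sketch: one result carrying all five context layers as
# structured metadata, so later analysis can slice along any of them.
contextualised_result = {
    "value": 98.7,
    "unit": "%",
    "context": {
        "data_type": "assay",                   # type-specific analyses
        "batch": "LOT-2013-0042",               # batch-to-batch comparison
        "process": "tablet-compression-v3",     # process-to-process comparison
        "site": "Basel",                        # site-to-site comparison
        "genealogy": ["API-771", "BLEND-102"],  # upstream/downstream correlation
    },
}
```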


Thermo Scientific Integration Manager for the Paperless Lab provides bridges between the islands of data generated in the lab and transforms that data into information that can be used across the enterprise. It provides access to all instrument data via a single interface and enables real-time investigation of results. The technology converts raw data to an XML storage format to ensure future-proof data archiving and to facilitate data and information sharing across the organisation, without having to rely on access to the original application. PerkinElmer offers an iLAB solution, which is part of its Ensemble for QA/QC



