Laboratory informatics
‘It’s an issue whichever industry you are in, and is especially relevant in the bioinformatics space, which encompasses a broad range of experimentation and data types. You may be willing for your collaborators to have access to data on your software, but what you don’t want is to have to spend weeks training individuals to use it.’

The imperative to have easily accessible and user-friendly solutions thus becomes more critical as the breadth and extent of bioinformatics data continues to grow, and new software is required to make sense of it, he continues. ‘Our customers also want to work with fewer vendors across disciplines within their R&D environment, so the pressure is increasing for vendors to develop flexible platforms that can support cross-discipline research, data sources and data formats.’

To support collaboration and manage data breadth and depth, PerkinElmer is making a comprehensive move into the cloud. ‘We already have the capability to support streaming a wide variety of data into the cloud in real time, from NGS and imaging data, to clinical trials and high content phenotypic data. But we also appreciate that this will be a stepwise progression for some customers, and so we offer a hybrid strategy where some data processing may be carried out on premises, using tried and tested systems and platforms that our customers trust.’

‘When it comes to data integration, it is not always feasible to rely on experts from each biological discipline,’ Hoefens adds. ‘You may be integrating NGS with mass spectrometry data or high content phenotypic screening data with pathology information. Our goal is to shield the user from the complexities of these research platforms, to allow them to ask the biological questions that they are trying to answer.’
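As a rough, purely illustrative sketch of what such cross-platform integration looks like at the data level (the column names and sample identifiers below are invented for the example, and this is not PerkinElmer tooling), result tables exported from different instruments can be joined on whatever identifier they share:

```python
# Illustrative only: a minimal sketch of identifier-based data integration.
# Assumes each platform exports a tabular summary keyed on a shared
# sample identifier (the 'sample_id' column here is hypothetical).
import pandas as pd

ngs = pd.DataFrame({
    "sample_id": ["S1", "S2", "S3"],
    "variant_count": [412, 388, 501],
})
imaging = pd.DataFrame({
    "sample_id": ["S1", "S2", "S3"],
    "live_cell_fraction": [0.91, 0.77, 0.84],
})
pathology = pd.DataFrame({
    "sample_id": ["S1", "S3"],
    "tumour_grade": [2, 3],
})

# Outer joins keep samples that are missing from one platform,
# so gaps in coverage stay visible rather than being silently dropped.
merged = ngs.merge(imaging, on="sample_id", how="outer") \
            .merge(pathology, on="sample_id", how="outer")
print(merged)
```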
Facilitating better integration

And when you look at it from a bottleneck perspective it’s more data integration, rather than data processing, that is the issue, he suggests. This is a topic that arises time and time again in discussions on how informatics solutions can support cross-discipline research in any R&D environment. PerkinElmer’s strategy is not to attempt to provide every piece of software that may be required, but to offer tools that facilitate better integration, and to establish more of an open research collaboration platform onto which customers can bolt their own preferred software. ‘Our own data integration and visualisation tool, TIBCO Spotfire, has been built as a generic platform that allows our customers to integrate their profiling data from whichever instrumentation they are using. However, we appreciate that customers may have their own preferred tools.’

PerkinElmer has developed software tools
that can sit on top of TIBCO Spotfire or other visualisation platforms to facilitate seamless data integration and interrogation. ‘Our GeneSifter product for the analysis of microarray and NGS data, for example,
is offered as an Analysis Edition tool for data manipulation, and as a Lab Edition, which provides laboratory information management system (LIMS) functionality.’ Both editions integrate with the company’s OmicsOffice suite of products for managing qPCR, microarray, NGS and functional genomics data, and all of this data analysis and exploration functionality then sits within Spotfire. Offered in parallel is Columbus, an image
analysis solution that can manage images imported from any major high content imaging instrument. ‘Columbus can extract features such as cell count, numbers of living cells, shapes of cells, as well as tissue pathology,’ Hoefens comments. ‘Spotfire’s high content profiler module then allows
multivariate analysis so that users can identify features of relevance, based on potentially thousands of parameters. And then, sitting next to all this is PerkinElmer Signals, our cloud-based big data platform, and in particular Signals for Translational, which pulls clinical trial, patient, and adverse event data into the same environment as the NGS, microarray and imaging data.’
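Purely as an illustration of the underlying idea rather than of the High Content Profiler itself, a multivariate reduction of a large image-derived feature table might look something like the following sketch (the well counts, feature counts and random data are invented for the example):

```python
# Illustrative sketch of multivariate analysis on high-content screening
# features; it shows the general idea (dimensionality reduction), not the
# actual High Content Profiler implementation. The data are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_wells, n_features = 384, 2000          # e.g. one plate, thousands of parameters
features = rng.normal(size=(n_wells, n_features))

# Standardise each parameter, then project onto the components that
# capture most of the variance across wells.
scaled = StandardScaler().fit_transform(features)
pca = PCA(n_components=10).fit(scaled)
scores = pca.transform(scaled)

print("variance explained:", pca.explained_variance_ratio_.round(3))
print("per-well scores shape:", scores.shape)   # (384, 10)
```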
Faster drug development timelines

The NGS community has for some time exploited computational clusters and high performance computing (HPC) to handle the size and complexity of data, but the benefits of HPC in the imaging field are only starting to be realised, Hoefens continues. ‘There are some very sophisticated algorithms now being applied to imaging data, and we are working with customers to migrate image analysis infrastructure onto HPC platforms that could dramatically reduce compute time, and in real terms shave possibly months off drug development timelines.’
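Much of that gain comes from the fact that per-image analysis is independent and so parallelises well. As a minimal sketch of that idea (not of PerkinElmer’s actual pipeline; the directory name and the feature-extraction function below are hypothetical placeholders), images can simply be fanned out across worker processes or cluster nodes:

```python
# Minimal sketch of parallel image analysis: independent images are
# distributed across worker processes. 'extract_features' is a hypothetical
# stand-in for the real image-analysis algorithm.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def extract_features(image_path: Path) -> dict:
    # Placeholder: a real pipeline would load the image and compute
    # cell counts, morphology, texture, and so on.
    return {"image": image_path.name, "cell_count": 0}

def analyse_plate(image_dir: str, workers: int = 8) -> list[dict]:
    paths = sorted(Path(image_dir).glob("*.tiff"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_features, paths))

if __name__ == "__main__":
    results = analyse_plate("plate_001")   # hypothetical image directory
    print(f"analysed {len(results)} images")
```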
Interestingly, he suggests that while the life sciences sector has in the past pioneered and driven innovation in software and informatics, the pendulum has now swung and there is significant innovation outside of life sciences. ‘We would do well to tap into the experiences of other sectors, particularly with respect to integrating, analysing and interrogating vast volumes of diversely structured data. And what we must realise is that scientific knowledge and experimental capabilities are also expanding. Vendors such as PerkinElmer need to develop flexible products that will evolve with the R&D landscape.’