LABORATORY INFORMATICS


Data standards


As the amount of data increases, organisations are looking to data standards to increase the value of data in the laboratory, writes Sophia Ktori


The Pistoia Alliance has been pioneering initiatives that will support organisations as they work to embed Fair (findable, accessible, interoperable, reusable) principles of data management and stewardship. The Fair data guidelines, published by Wilkinson et al in 2016, have been adopted across industries as the foundation for maximising the value and interoperability of data. Fair principles hinge on the digitisation of data acquisition, access, management and mining, explains Ian Harrow, a Pistoia Alliance consultant. And while the concept may seem intuitive, he believes that companies are still finding it hard to get to grips with Fair as an ideology for data management.


With this in mind, the Pistoia Alliance is working to develop a toolkit that will support companies as they work to understand and implement Fair guidelines across their organisations, not just in individual departments. The toolkit, the first version of which is due for release early in 2020, will be hosted on a freely accessible website, and will encompass selected tools, best practices, training materials, use cases and a methodology for change management. ‘The aim is to provide easy-to-use, straightforward tools in a one-stop-shop environment that will help companies in their drive to change the corporate mindset and implement a Fair data infrastructure enterprise-wide,’ Harrow stated. In fact, it’s that concept of change management that is key to success, Harrow pointed out.

12 Scientific Computing World December 2019/January 2020

Many of the software platforms and tools that will help companies establish a Fair data infrastructure are already available, but the how and why of implementation may be less well understood, especially when cost is involved. ‘In effect, it’s not necessarily the technology pieces that companies are struggling with. The stumbling block is sometimes the cultural evolution that will be needed across the organisation. Scientists think of the data they generate in terms of ownership, but at every level, from scientist up to senior management, people have to move on from that magpie mentality and view pockets of data, or data streams, as part of the larger whole, and integral to the concept of data as a corporate asset.’

The Pistoia Alliance toolkit will combine pointers to digital/software tools and other resources or technologies that help companies to measure how Fair their data is, and how to increase that. From a practical perspective, the Pistoia initiative shows organisations how to use existing software tools as part of a complete package for making data Fair. But the toolkit also encompasses education and training materials that will help users work out how best to implement change across the organisation.
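Tools of this kind typically score a dataset's metadata against a checklist for each Fair principle. The sketch below is purely illustrative: the field names, checks and weights are hypothetical assumptions, not drawn from the Pistoia toolkit or any published Fair metric.

```python
# Hypothetical sketch of a Fair maturity score: for each principle,
# report the fraction of illustrative metadata checks a record passes.
# The check names are invented for this example.

FAIR_CHECKS = {
    "findable": ["persistent_id", "rich_metadata"],
    "accessible": ["retrievable_by_id", "open_protocol"],
    "interoperable": ["standard_vocabulary", "standard_format"],
    "reusable": ["licence", "provenance"],
}

def fair_score(metadata: dict) -> dict:
    """Return the fraction of checks satisfied per Fair principle."""
    return {
        principle: sum(metadata.get(field, False) for field in fields) / len(fields)
        for principle, fields in FAIR_CHECKS.items()
    }

record = {
    "persistent_id": True, "rich_metadata": True,
    "retrievable_by_id": True, "open_protocol": False,
    "standard_vocabulary": False, "standard_format": True,
    "licence": True, "provenance": False,
}
print(fair_score(record))
# {'findable': 1.0, 'accessible': 0.5, 'interoperable': 0.5, 'reusable': 0.5}
```

A per-principle breakdown like this, rather than a single number, makes it easier to see where a dataset falls short and where "Fair enough" might be acceptable.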


Fair enough

The overarching aim is to demonstrate the how and why, not necessarily to point to or provide all of the tools that will enable organisations to accomplish that. Use cases will demonstrate real-world applications and benefits, Harrow said. ‘Use cases are vital, real-world pieces of the toolkit that can show the benefits of working towards making data Fair. In most cases there will be aspects of making data “Fair enough”, rather than “perfectly Fair”, and the use cases demonstrate what this might mean for data management and handling on a practical, ongoing basis. It’s commonly a matter of perspective.

‘The toolkit will include a number of use cases from the different project teams, and they will really sell why making data Fair is so important. Ultimately, we want to show how the application of Fair and standardised data can impact bottom lines – return on investment and development costs/timelines,’ he said.

One of the biggest practical hurdles is what to do with data acquired through mergers and acquisitions, when potentially massive amounts of legacy data come into an organisation, commonly in proprietary formats. Someone has to grade the relative importance of all this data, and prioritise its acquisition and husbanding, Harrow said. ‘You have to identify your low-hanging fruit to show management – who hold the purse strings – the value of making that legacy data Fair and integrating it with your existing in-house data. There has to be a balance between pragmatism and forward thinking. Where does it make sense to make your data Fair? Not just for your operation now, but potentially to match future projects and objectives,’ added Harrow.

Once the ‘whys’ and ‘whats’ have been established, at what point do companies start to figure out the practical ‘how’ of making their data streams Fair, or putting in place the infrastructure that will ensure future, as well as historical, data is Fair? This is where it becomes important to think about data standards. ‘Importantly, organisations should use existing open source,



