LABORATORY INFORMATICS
An artificial future
ELSEVIER’S JABE WILSON PREDICTS RADICAL CHANGES IN THE WAYS AI WILL BE USED IN SCHOLARLY COMMUNICATIONS
Tell us a little about your background and career… My background is primarily in
artificial intelligence (AI), which I have been working with for almost 30 years. One of my first AI programming experiences was in 1990 when I was a student, studying for a Bachelor of Arts in AI at the School of Cognitive and Computing Sciences at Sussex University. I coded simple artificial neural networks in the Poplog environment, and expert systems and natural language processing parsers in the Prolog language. In the mid-1990s I moved into applying cognitive science insights to the design of software user interfaces, something now called user experience (UX). I began my career working on early
websites as well as interactive touch screen and CD-ROM systems for clients such as Europcar, Disneyland Paris and Sheraton Hotels. I have been lucky enough to work in start-ups launched during the earliest days of the web and to teach interaction design at the Royal College of Art. I left the start-up scene and joined Elsevier in 1999, where I have participated in the transformation of an established scientific print publisher into a predominantly digital information analytics company. I was excited to join Elsevier because the challenges addressed by its work continually inspire me and show how technology can make a difference in the wider world. The role has combined my interests in AI and software interaction design with meeting challenging customer demands through knowledge management approaches, something now known as semantic technologies. This work focuses on analytics solutions that support research to address many of the world’s most challenging problems in health, drug development, engineering and science more generally.
Since working at Elsevier I’ve been involved in a wide range of innovative projects. Key milestones include: Elsevier’s first electronic reference books; early collaborative social networks in biomedical science with BioMedNet.com; indexing for adverse drug reactions with the development of Pharmapendium; semantic knowledge bases for therapeutic drug design in cardiac arrhythmia; the introduction of adaptive learning courses into the National Health Service in the UK with Elsevier Clinical Skills; and developing the semantic search product Target Insights, which has since become Elsevier Text Mining.
You’ve been with Elsevier for nearly 20 years. What is the biggest change you have witnessed in the scholarly communications industry during that time? Perhaps the biggest change in scholarly publishing is the one we now take for granted – the move to digital. The growth in the value of indexing, alongside the various forms of analytics, has dramatically changed the industry for the better. This development is especially important in the modern world, due to the sheer volume of data available to today’s scientists. The move to digital, when supported with the right tools, has made it far easier for researchers to find answers and insights into increasingly complex problems, filtering out unnecessary data. The second key change is the growth
of social collaboration in science and the ability to share data sets and navigate communities of practice – at Elsevier, this drive is supported by Mendeley. These days, there are elements of science that
rely solely on collaboration – no one group has all the data and resources to solve multifaceted problems. Recent research from Elsevier supports this: 59 per cent of chemists said that the ability to collaborate with researchers in other fields and geographies will be fundamental to scientific research moving forward.
What are the biggest recent changes in the use of data analytics in the industry? Following the transformation of scholarly communication to digital platforms, the move to create semantic data that captures knowledge has increased significantly. Another way to describe this is as a shift in focus from articles as a whole towards the individual ‘facts’ reported in publications. This has been driven by the maturing and increased productivity of automated approaches to identifying and extracting these facts, as well as the steps to bring AI to fruition in the form of machine learning. Over recent years I have seen the shift from human curation, to rules-based automated indexing, through to the application of statistical approaches such as deep learning and machine reasoning.
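To make that distinction concrete, the following is a minimal sketch, in Python, of the rules-based style of fact extraction described above. The pattern, the vocabularies and the drug and adverse-effect example are invented for illustration; they do not reflect Elsevier’s actual indexing pipelines, and a statistical system would instead learn such patterns from annotated corpora rather than hand-written rules.

```python
import re

# Minimal rules-based fact extraction, in the style described above.
# The pattern and vocabularies are invented for illustration only.
KNOWN_DRUGS = {"aspirin", "warfarin"}
KNOWN_EFFECTS = {"bleeding", "nausea"}

PATTERN = re.compile(
    r"(?P<drug>\w+)\s+(?:is associated with|causes|induces)\s+(?P<effect>\w+)",
    re.IGNORECASE,
)

def extract_facts(sentence):
    """Return (drug, effect) pairs matched by the rule and the vocabularies."""
    facts = []
    for match in PATTERN.finditer(sentence):
        drug = match.group("drug").lower()
        effect = match.group("effect").lower()
        if drug in KNOWN_DRUGS and effect in KNOWN_EFFECTS:
            facts.append((drug, effect))
    return facts

print(extract_facts("In this cohort, warfarin is associated with bleeding."))
# -> [('warfarin', 'bleeding')]
```

The appeal of such rules is that they are transparent and easy to curate; their weakness, which statistical approaches address, is that every new phrasing needs a new rule.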
Semantic data means we are now able to link facts that are related across papers, and across different domains of knowledge, to deliver insights that might not be obvious from any one paper alone. Doing so requires normalising the terminology with taxonomies, so that a network can be created. The increasing reliance on the linked facts that these developments have enabled means that the demands of the modern researcher are changing. At Elsevier, this has meant we have been increasingly focused on taking the expertise we have in developing semantic databases and analytical products, and bringing it to bear on delivering bespoke solutions for customers with specific needs. The next phase we are working on is combining semantic technology methods with various machine learning and machine reasoning approaches to create new insights.
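As a rough illustration of that normalisation step (a hypothetical example, not Elsevier data or tooling), the sketch below maps synonyms from different papers onto preferred taxonomy terms, so that the same underlying fact reported twice becomes one linked, multiply-evidenced triple:

```python
# Hypothetical taxonomy mapping raw terms onto preferred labels.
TAXONOMY = {
    "mi": "myocardial infarction",
    "heart attack": "myocardial infarction",
}

# (paper, subject, relation, object) facts as extracted from two papers.
FACTS = [
    ("paper-1", "drug-x", "reduces risk of", "heart attack"),
    ("paper-2", "drug-x", "reduces risk of", "MI"),
]

def normalise(term):
    """Map a raw term onto its preferred taxonomy label (identity if unknown)."""
    return TAXONOMY.get(term.lower(), term.lower())

# Group evidence by normalised triple: two papers, one linked fact.
linked = {}
for paper, subj, rel, obj in FACTS:
    key = (normalise(subj), rel, normalise(obj))
    linked.setdefault(key, []).append(paper)

for triple, papers in linked.items():
    print(triple, "supported by", papers)
# -> ('drug-x', 'reduces risk of', 'myocardial infarction')
#    supported by ['paper-1', 'paper-2']
```

Without the taxonomy lookup, ‘heart attack’ and ‘MI’ would remain two disconnected nodes; with it, evidence from both papers accumulates on a single fact in the network.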
Could you tell us about any recent data projects you have been involved in? I work in the professional services team developing specific solutions for