Sponsored content


At Monsanto, Williamson's role focuses on finding ways to help the company draw better research inferences from its genomic datasets. His team, he says, had previously managed its data in a classical, relational way. However: ‘Every question we would want to ask needed lots of real-time analysis to be run and it would take us seconds to minutes to hours to perform one round of analysis, which doesn’t scale.’

Monsanto is continually researching which plant varieties are the best possible, and which genetic traits cause them to thrive in different climatic and environmental conditions. Some of these genetic problems depend on being able to treat a dataset as an ancestral family tree. Williamson found that these family-tree datasets mapped naturally onto a graph database, and that it was easy to write graph queries instead.

“The power of graph database technology is in discovering relationships between data points and understanding them at huge scale”

Analysis that used to take minutes or hours, he says, now takes seconds: ‘This was really cool, because then we could do it across everything; I could ask the same question of several million objects instead of one at a time.’ That frees Monsanto up to make abstractions around important genetic features, such as which plant ancestors consistently produce the most productive cross-breeds. As a result, it is getting much quicker at spotting which plant varieties are the most productive, so the firm can spend its resources researching those rather than less promising options. Another medical research effort


using graph database technology is the EU FP7 HOMAGE consortium, which focuses on the early detection and prevention of heart failure. It holds a dataset from more than 45,000 patients, originating from 22 cohort studies and covering patient characteristics and clinical parameters such as medical history, electrocardiograms and biochemical measurements. All this patient data is now being connected with existing biomedical knowledge in public databases to create an analysis platform, so that all the relationships in these elements, implicit and explicit, can be explored by the team of bio-researchers. The scale makes graph technology a natural fit: the graph database for just one heart-failure network analysis platform contains more than 130,000 nodes and seven million relationships.
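To make the idea concrete, here is a minimal sketch of the kind of ancestry query described above, written as a plain-Python traversal rather than against an actual graph database; the variety names and pedigree are invented for illustration, not taken from Monsanto's data.

```python
from collections import deque

# Hypothetical pedigree: each variety maps to its parent varieties.
# In a graph database these would be (:Variety)-[:CHILD_OF]->(:Variety) edges.
PARENTS = {
    "hybrid_C": ["line_A", "line_B"],
    "line_A": ["landrace_X"],
    "line_B": ["landrace_X", "landrace_Y"],
}

def ancestors(variety: str) -> set[str]:
    """Breadth-first walk up the family tree, collecting every ancestor once."""
    seen: set[str] = set()
    queue = deque([variety])
    while queue:
        node = queue.popleft()
        for parent in PARENTS.get(node, ()):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(ancestors("hybrid_C")))
# ['landrace_X', 'landrace_Y', 'line_A', 'line_B']
```

The appeal of a graph query language is that this hand-written loop collapses to a declarative pattern (in Cypher, roughly `MATCH (v:Variety {name: 'hybrid_C'})-[:CHILD_OF*]->(a) RETURN DISTINCT a`), which the database can evaluate across millions of varieties at once.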


‘Adept at discovering unknowns at big data scale’

Integrating data and knowledge in the life sciences means modelling an incomplete and ever-changing picture of how our bodies work and what we know about them. One practical hurdle for computational biology is that biologists and clinicians describe things differently depending on context, so labels often end up ambiguous. Biologists in different subdomains use different terms, with the same protein going by many different names. Meanwhile, there are at least 35 different ontologies describing heart failure and its associated phenotypes, all partly overlapping, and all partly tuned to a specific application. And as our knowledge of biology progresses, these models continuously need to change: a large part of human DNA originally deemed junk, for example, has turned out to be an important player in the system. In biology, everything really is connected, and those connections change depending on context, time and environmental triggers. To deal with these challenges, researchers need tools like graph databases, which are adept at discovering unknowns at big data scale but are also able to handle dynamic, constantly evolving data. It turns out that researchers are using


graphs in other research areas as well. NASA scientists, for instance, use graph techniques to map and track trend information. Last but not least, were you aware that academic research stalwart Google Scholar is also built on graph technology? Clearly, graphs are helping researchers from many differing fields cope with the emergence of big data. Could they help your team, too?
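The naming problem described earlier (one protein, many labels) is itself naturally a graph problem. The sketch below, with invented labels, treats synonym assertions as undirected 'same-as' edges and resolves any label to a canonical identifier by walking its connected component; a real platform would drive this from curated ontology mappings rather than a hand-written list.

```python
from collections import defaultdict

# Hypothetical synonym assertions: pairs of labels naming the same entity.
SAME_AS = [
    ("NPPB", "BNP"),
    ("BNP", "natriuretic peptide B"),
    ("natriuretic peptide B", "brain natriuretic peptide"),
]

def canonical(label: str, edges=SAME_AS) -> str:
    """Resolve a label to one canonical name: the lexicographically
    smallest label in its connected component of same-as edges."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen = {label}
    stack = [label]
    while stack:
        node = stack.pop()
        for neighbour in adj[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                stack.append(neighbour)
    return min(seen)

print(canonical("natriuretic peptide B"))  # "BNP"
```

Because the mapping is stored as edges rather than baked into the records, new synonyms or corrected ontology links can be added without rewriting the underlying data, which is exactly the kind of constant evolution the article describes.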


Product Spotlight


RightsLink® Delivering Value-Added Author Services


RightsLink® from Copyright Clearance Center (CCC) is a robust web-based service designed to assist publishers in the collection and management of their article processing charges (APCs) and other author- or publication-related fees, pre- and post-publication. With the ability to integrate seamlessly into production and editorial workflows, including those of the leading manuscript management system providers, CCC’s next-generation platform streamlines the entire author fee transaction for Open Access, page, color and other publication charges, author reprints, submission charges and more. Publishers can leverage this platform to offer their authors value-added services at any stage of the production workflow.

Powered by sophisticated rules and a robust pricing engine, RightsLink offers publishers flexible pricing and discount capabilities that facilitate customization, such as institution-specific pricing, membership discounts, geographically- or funder-aware discounts, and deposit or drawdown account support. At every step, RightsLink automates processes to help ensure that critical research gets published as quickly as possible.

With this platform, publishers can provide a best-in-class customer experience for their authors; implement agile business rules; reinforce their brand; and access important transaction data for use internally or to share with external stakeholders. Ongoing product development for RightsLink is informed by regular user groups, roundtables, thought-leadership panels and market feedback from customers and partners whose mission is to deliver high-quality author services.

Copyright Clearance Center has over 35 years of experience working with rightsholders and is globally recognized as a trusted partner within the publishing and author communities. Learn how CCC can help publishers gain the flexibility, scalability and predictability needed to address the challenges of collecting author fees.


For more information: www.copyright.com/rightslinkauthorservices


Emil Eifrem is CEO of Neo Technology


February/March 2017 Research Information 23

