

Cloud and browser – the future of academia?

Will we see a future where all scientific analysis becomes browser-based? Mark Hahnel offers his thoughts

As both research funders and governments mandate that researchers and institutions store, curate and disseminate their research outputs, the cloud is becoming more and more attractive as the place where the products of academia should live. New directives on both sides of the Atlantic mean that we are on the brink of an avalanche of academic data becoming openly available online. The potential for the progress of academia in general is huge, but with it comes a need for solutions and new technology built on the backbone of the cloud and the browser.

The cloud, with its fast load times, scalability, automated application deployment, multiple back-ups and constantly updated hardware, means that institutions need not create their own server centres, with their associated running costs and rapidly dating technology. We are moving beyond the merely ‘making academic data openly available’ phase to one where we can derive new insight from larger data sources. At this stage, the ability of any academic developer to access the processing power of thousands of servers at the click of a button also demonstrates the inherent power of scale that commercial cloud services can provide.

As large numbers of research outputs are made openly available with appropriate metadata, is linked open data closer to becoming a reality across academia? It is generally accepted that automating data collection in a machine-processable way across academia globally is the most efficient way to move research forward. Having multiple siloed instances within research institutions hinders the ease with which data from different projects can be pulled together semantically.

By making use of service providers such as Amazon Web Services (AWS) and Microsoft Azure, these files need only be stored in one place, at one persistent URL. If this is the case, then a future where all scientific analysis becomes browser-based seems like the next logical step. With much of the research that has already been made available, the limiting factor isn’t the storage space itself but the bandwidth constrictions of the consumers. Microsoft Research is actively going after the academic space, focusing on research groups that need the computational power and storage options that Azure can offer. AWS already hosts publicly available datasets, including genomics data, at no cost to the institution.

By engaging with institutions at this level, it can be assumed that these companies are hoping to build strong relationships, so that when research groups start building web-based analytical applications, as Berkeley and Harvard already are, they will sign on at some enterprise level.

An ideal view would see specific web-based apps being developed and applied over these large, open data sets, or even multiple data sets being pulled in via APIs from different persistent locations. This would lead to huge savings, with fewer redundant copies of large outputs being stored behind the walls of academic institutions around the globe.

The potential of linked open data to revolutionise the efficiency of drug discovery, and academic progress in general, cannot be overstated. The real remaining question is: whose responsibility is it to build these browser tools and apps? Decades after the web was created for disseminating academic content, organised groups of researchers are making huge inroads into exploiting web-based technology and integrating it into existing workflows. Examples such as SciPy and R take the best elements of the open-source community and apply them to academic research. As researchers begin to see the benefits and efficiencies of such technologies, the momentum should empower a new age of academic efficiency. It has been a long time coming.
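To make the idea of pulling multiple open data sets together concrete, the sketch below joins two hypothetical data sets on a shared sample identifier and computes a group statistic. In practice each CSV payload would be fetched from its persistent URL (for example with `urllib.request.urlopen`); here the payloads, field names and sample IDs are all invented and inlined so the sketch runs without network access.

```python
import csv
import io
import statistics

# Hypothetical payloads standing in for two open data sets that would
# normally be retrieved from persistent URLs via an API.
EXPRESSION_CSV = """sample_id,expression
S1,2.1
S2,3.4
S3,2.8
"""

PHENOTYPE_CSV = """sample_id,phenotype
S1,control
S2,treated
S3,treated
"""

def load_rows(csv_text):
    """Parse a CSV payload into a dict of records keyed by sample_id."""
    return {row["sample_id"]: row for row in csv.DictReader(io.StringIO(csv_text))}

def merge_on_sample(*tables):
    """Join records that share a sample_id across all tables."""
    shared = set.intersection(*(set(t) for t in tables))
    merged = {}
    for sid in shared:
        record = {}
        for table in tables:
            record.update(table[sid])
        merged[sid] = record
    return merged

expression = load_rows(EXPRESSION_CSV)
phenotype = load_rows(PHENOTYPE_CSV)
combined = merge_on_sample(expression, phenotype)

# Mean expression for the 'treated' group of the combined data set.
treated = [float(r["expression"]) for r in combined.values()
           if r["phenotype"] == "treated"]
print(round(statistics.mean(treated), 2))  # 3.1
```

The join itself is trivial once both data sets agree on an identifier; the hard part the article describes is getting siloed institutional repositories to expose such identifiers and metadata consistently in the first place.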

Mark Hahnel is the founder of Figshare. His full article can be found at


Latest on

European Institute of Oncology deploys Thermo Scientific LIMS Based in Milan, Italy, the European Institute of Oncology is using a Thermo Scientific laboratory information management system (LIMS) to process more than 4,000 biospecimens annually, including liquids, solids, and DNA and RNA.

Titian continues expansion in the Far East Titian Software has granted distribution rights to LBD Life Sciences, which will represent Titian’s Mosaic sample management software in China and provide a more accessible support service for Chinese customers.

Sanofi deploys Elsevier literature management tools Sanofi, a global pharmaceutical organisation, has implemented Elsevier’s QUOSA literature management tools to automate adverse event monitoring. QUOSA powers the retrieval, storage, tagging and annotation of relevant case reports, allowing for the creation of a centralised repository of product-related scientific literature.

Allotrope Foundation and Osthus to develop open framework Allotrope Foundation has partnered with Osthus to develop its open framework for laboratory data. Following the deal, Osthus will provide consulting services to Allotrope, as well as the design, coding and implementation of the framework.

