Occasional supercomputing

security at these companies. ‘Cloud services should be an extension of a company’s own IT environment,’ he said.

On-demand

Gridcore offers an on-demand service that is based around middleware and deployed on customers’ premises. This is matched to any existing high-performance computing environment, making it easy for users to adapt. Private clusters are available for anything between one day and three years, and should a company require increased computing power it can choose to add more nodes. Gridcore’s remote visualisation software allows the aggregation of users from any part of the world, so that organisations can engage globally. As a result of this aggregation, companies can enforce a policy of ensuring all data is located in Gompute. Because everything is manipulated remotely, and there is no transfer of a complete set of data, security risks are minimised. Jerry Dixon, business development manager

at OCF, believes that an HPC-on-demand service is the way forward for many companies. ‘A high-performance server cluster accessible via the Internet enables research teams to access far larger compute power than they could afford themselves. Researchers need only pay for

Case study

A rising number of academic institutions and supercomputing centres are offering on-demand services. Accelerator is one such example.

Offering 200,000 cores of raw compute power, the expanded Accelerator service is being promoted as the most comprehensive and flexible HPC on-demand resource in Europe. Provided by EPCC, the world-class supercomputing centre at the University of Edinburgh in the UK, Accelerator has been created by combining access to HECToR, the UK’s national computing service, with two new systems: an IBM BlueGene/Q, and Indy, a dual configuration Linux-Windows HPC cluster. Accelerator is aimed at scientists and engineers


solving complex simulation and modelling problems in fields such as bioinformatics, computational biology, computational chemistry, computational fluid dynamics, finite element analysis, life sciences, and earth sciences. It has the capability to support a wide range of modelling and simulation scenarios, from the simple modelling of subsystem parts through to the modelling of entire systems. Combined with the ability to run multiple simulations in parallel, Accelerator can dramatically reduce discovery and innovation timescales. Operating on a pay-per-use model, users gain access to resources via an Ethernet connection enabling them to have supercomputing capability at their desktop. Unlike cloud-based services, no virtualisation techniques are deployed and the

service is administered directly by users through a range of administration and reporting functions. The service is fully supported with an integral help desk, and EPCC support staff are available to help with usage problems such as compiling codes and running jobs. Addressing the concern of security, Accelerator’s hardware, which includes petabyte-scale data storage, is housed in EPCC’s Advanced Computing Facility, which was purpose-built to provide what EPCC says are the highest levels of physical and online security. Service access is governed by tightly controlled security and access procedures, with users managing all their own data, code and resource usage through unique, secure accounts.


the compute power that they actually use and they can use the service with confidence that all the equipment is fully up to date. And concerns over energy, cooling and space are transferred to the supplier,’ he said. Regardless of whether the


decision is made to invest in deskside supercomputing or in on-demand services like the cloud, there are companies whose services are designed to ensure optimal use of resources. Univa’s Grid Engine software is a distributed resource management, or batch-queuing, system that orchestrates users’ workloads onto available compute resources, reducing idle cycles. Grid Engine was initially developed and supported by Sun Microsystems as an open-source solution, and as a consequence it was easily attainable and had a thriving community discussing how to tune, optimise and integrate it. Fritz Ferstl, CTO at Univa, explained that because of the software’s history, and because its development was sponsored by Sun, it was a production-mature technology from the start rather than an emergent solution, which can often start out as an incredibly shaky piece of software. Grid Engine evolved further over time and many features were upgraded, making it a cost-effective option for many users. When Oracle acquired Sun the open

source product development ended and eventually the







development team and the evolution of Grid Engine moved over to long-term partner Univa. ‘Essentially, the software turns the entire computing infrastructure into a black box,’ said Ferstl. ‘Once users have integrated their applications and workflow, any hardware that’s put behind it – whether it is then upgraded, expanded or re-configured – becomes completely transparent to the end user at the front. They simply submit their jobs as usual and, depending on what is done in the back end of the system, jobs might run better and faster, but the key point is that users don’t need to concern themselves with what’s behind it.’ Should a company choose to completely change its

infrastructure and move the computing over to the cloud, or even if it were to replace the hardware, the same software layers could still be used. ‘Companies are free to deploy new solutions or applications, or take advantage of developing accelerator card technology (e.g. GPUs), without the need to retrain users or administrators. For the end user, very little would change.’ The danger, however, is that software like

this is mission-critical because it sits between the hardware and applications, and enables the workflow. Ferstl explained that if the software were to fail, the hardware would become useless as no jobs would be running. ‘Much like a conveyor belt in a factory, if the software stops, everything stops.’
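The job-submission workflow Ferstl describes can be sketched as a minimal Grid Engine batch script. This is an illustrative fragment, not taken from the article: the job name, parallel environment name, resource values and solver command are all assumptions, and running it requires an actual Grid Engine installation.

```shell
#!/bin/bash
# Minimal Grid Engine job script (illustrative sketch; names and values are assumptions).
#$ -N cfd_run          # job name (hypothetical)
#$ -cwd                # run the job from the current working directory
#$ -pe mpi 64          # request 64 slots in a parallel environment assumed to be called "mpi"
#$ -l h_rt=02:00:00    # hard wall-clock limit of two hours
#$ -o cfd_run.log      # write job output to this log file
#$ -j y                # merge stderr into stdout

# The user only describes the work; the scheduler decides where it runs.
# $NSLOTS is set by Grid Engine to the number of slots actually granted.
mpirun -np "$NSLOTS" ./solver input.dat
```

Submitted with `qsub`, the script is queued and dispatched to whichever nodes the scheduler selects, which is the ‘black box’ behaviour Ferstl describes: the same script keeps working even if the cluster behind the scheduler is upgraded, expanded or replaced.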
