HPC 2014-15 | Markets
Raijin, the 57,500-core Fujitsu Primergy cluster at NCI
Supporting the SKA project

The Square Kilometre Array (SKA), one of the largest scientific endeavours in history, is a $2.3 billion international project to build a next-generation radio telescope in South Africa and Australia that will help scientists answer fundamental questions about the origins of the universe, such as how the first stars and galaxies were formed. It will be 50 times more sensitive and able to survey 10,000 times faster than today's most advanced telescopes, and is expected to generate one exabyte of data per day.

The Pawsey Supercomputing Centre is one of the 20 members of the SKA Science Data Processing Consortium, which is responsible for designing the hardware and software to analyse, process and visualise the data produced by the SKA. The data from the Australian component of the SKA is under consideration to be processed and stored primarily at the Pawsey Centre. The data produced will be too large to store for any reasonable period of time, so it must be managed in real time, necessitating immense processing power (see Big Data needs networks, page 14).

Two precursor projects to the SKA, the Australian Square Kilometre Array Pathfinder (ASKAP) and the Murchison Widefield Array (MWA), were launched in late 2012 and serve as important technological demonstrators. The MWA telescope has been in full operation since 2013, and data from the first antennas of the ASKAP project are already being processed by the real-time Cray supercomputer at the Pawsey Supercomputing Centre.
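The scale of the real-time requirement follows directly from the quoted data rate. A back-of-envelope calculation (our arithmetic, not a figure from the report) converts one exabyte per day into the sustained bandwidth a processing centre would have to absorb:

```python
# Sustained bandwidth implied by one exabyte of data per day.
# Illustrative arithmetic only; assumes the decimal (SI) exabyte.

EXABYTE = 10**18          # bytes
SECONDS_PER_DAY = 86_400

bytes_per_second = EXABYTE / SECONDS_PER_DAY
terabytes_per_second = bytes_per_second / 10**12

print(f"{terabytes_per_second:.1f} TB/s sustained")  # ≈ 11.6 TB/s
```

At roughly 11.6 TB/s around the clock, buffering the raw stream to disk for later analysis is impractical, which is why the data must be reduced as it arrives.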
Managing the diversity of HPC requirements

The workloads of Australia's petascale systems are perhaps atypical of facilities elsewhere, in that they need to marry computational capability with data-intensive capacity, and to reconcile the requirements of merit-based access for university researchers with the R&D and service requirements of national agencies. The relative lack of tier 2 systems in Australia imposes workloads on the tier 1 systems that would otherwise be handled in other ways. Accordingly, the tier 1 facilities serve the full gamut of research (pure, strategic, applied and industry), providing the platform from which Australian researchers maintain international competitiveness and the science agencies undertake research that delivers national benefits.

There are also significant legacy workloads: applications that are no longer of supercomputer class. Accordingly, there is a need to migrate these lowly-scaling tasks onto more suitable platforms, both to increase the effectiveness of the system and to give researchers greater opportunities through access to more advanced tools and methods better suited to a supercomputer. From a systems point of view, the cloud now provides a significant opportunity for hosting such workloads. It is also possible that the cloud may ultimately present a threat, but for as long as tier 1 HPC facilities are valued and serve as crucial platforms for research collaboration, that threat is some distance away.
Addressing the skills shortage

The value of national investments in HPC depends as much on soft infrastructure (skills, software capability, etc.) as it does on hardware. At this time, none of the current tier 1 facilities in Australia is equipped with accelerator technology, reflecting the primary usage drivers but also a skills gap and insufficient capacity with which to migrate the user base towards methods that exploit accelerators. The next generation of procurements must inevitably include an accelerator component if performance gains are to be realised at reasonable cost for both procurement and operations. It will be a 'goldilocks' decision, however: too small an accelerator fraction will see Australia's competitive position weakened, while too great a fraction may be wasted if the user base is unable to take advantage of it.