HPC 2015-16 | Integrators

Although Cray may decide to 'integrate' commodity components into its own products in this fashion, it provides the value-add for its customers by its own design and quality. Bolding pointed out that, for example, Cray had put a lot of effort and money into voltage regulation – more so than the commodity providers – and that this led to increased reliability across the cabinets. Similarly, it was better for Cray to design its own boards – they are not cheaper but they are more reliable, Bolding said, whereas the quality of commodity boards was not as good. 'We have to do constant comparisons with commodity pricing – what is the right trade-off between reliability and cost?' he said.

In this respect, what is best for the customer may not necessarily always be what a technological purist might consider the best technology. The interplay of technology and market dynamics is a complex one: 'Our engineers have to love the technology. My job is the business – I need a different perspective. If we can help a beautiful technology survive, we'll do it if it is best for the customer.'

The extended sense in which Cray can be regarded as an integrator is apparent in a further comment by Bolding – that the judgement over which components to provide in-house, and which to buy from the commodity market, is 'a judgement that has to be made at the system level'. Individual processors or storage components may be outstanding, but Cray, in Bolding's view, has to deliver to its customers an integrated system 'that can be up and running and solving their problems in hours'.

Convergence of HPC and big data in a single architecture?
Virtually all of the suppliers of HPC systems, whether they be technology providers like Bull, Lenovo, IBM and Cray or integrators such as Transtec and ClusterVision, believe that the future lies in some sort of convergence between data processing and the traditional number-crunching business of HPC. In finding ways to address this challenge, a subtle divergence appears to be opening in the market, principally between IBM and most of the others.

Many of the companies believe that different architectures will have to be developed, tailored to different applications – there will be no one-size-fits-all. In contrast, IBM believes that its Power series of processors and the Open Power 'ecosystem' behind it will offer the way to do both HPC and big data processing.




For IBM's Gupta, Open Power is a 'chance to redefine the data centre'. If high-performance computing and high-performance analytics are converging, he believes that the underlying technologies can transfer, although some of the specific technologies may be different. IBM now has a single architecture – a data-centric one – and this means that in future HPC architectures, 'you will move the compute to the data', rather than the other way round, he said. 'We're not just going after HPC, but big data, cloud, machine learning, and the enterprise. We have an architecture that goes across all these sectors.'

There will be some tweaking and optimising for different workloads, he continued, but it is clear that IBM's vision is for the underlying Open Power architecture to run across all applications and types of work. Gupta sees both the Open Power foundation and his job, in particular, as encouraging interest among developers to write applications: 'Once you have an architecture, you need an ecosystem. IBM understands developers.'

Different technology for different workloads?
Cray's Bolding, however, believes: 'We will see a diversification of technologies to solve different problems. When you're delivering a system and the customer has distinctive needs, you have to mix and match. We have to have different technologies. You're going to see architectures that are adapted to workloads.'

Lenovo's Rosen has a similar view. He believes that the future of HPC will be neither a single architecture nor a single processor, but workload-optimised rather than general-purpose systems. If HPC does evolve in this direction, it differentiates it from the hyperscale business, in his view. He sees a role not only for ARM processors, but also FPGAs and GPUs, in systems that have been optimised for specific workloads.

Although Rosen did not cite this example, a recent case in point might be the QPACE2 machine, built by Eurotech, that recently went live at the University of Regensburg in Germany, as reported on the Scientific Computing World website. Among other distinctive design features, this uses Xeon Phi co-processors rather than conventional CPUs to do all the computing for problems in quantum chromodynamics.

"An integrator is not just a sales channel: the relationship is a partnership"

In the face of an uncertain future, Cray's strategy is to leave its options open for as long as possible, by creating an underlying platform that is flexible enough to accommodate very different technologies and on which it can build different choices. According to Bolding, when it is appraising its technology choices, Cray has to look five to seven years into the future – far further ahead than a conventional integrator. 'Because of our unique position at the high end with long delivery times, we get early access to the roadmaps from all the technology providers and we model many types of processors and other technologies so we can make choices. But we have to make decisions about how to have flexibility to leave decisions as late as possible.' That's why, he concluded, 'we have to design a good platform.'

There is convergence in Cray's strategy – it is bringing its current-generation XC and CS lines into a single platform, called Shasta. But the emphasis is on its adaptability to enable different configurations, and thus different options for price and performance, to meet the differing requirements of commercial, government and academic organisations. Shasta's architecture is being designed with the flexibility to accommodate a variety of processors, networking, memory, cabinet packaging, power, and cooling systems.

[Image: Volker Lutz (left) and Daniel Kollmer working on the Oculus cluster at Paderborn, another ClusterVision installation. Credit: Bjoern Olausson]

