A question of scale


The Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich in Germany has been operating supercomputers of the highest performance class since 1987. Tim Gillett talks to Norbert Attig and Thomas Eickermann


What separates JSC from the other Gauss centres?

For many years we have been dealing with highly scalable systems. Within Gauss it was decided that we should have different architectures at the different centres, and from the very beginning we decided that we should concentrate on scalable systems. The other thing that distinguishes us is our unique support environment. We were the first to become aware that allocating a single adviser to a particular project is not helpful when you look at scalability, community codes, and so on – you need a structure that retains knowledge about a specific project over a longer period of time. This is the reason we changed our whole support concept in 2007 and came up with the idea of the SimLab, or simulation laboratory. The SimLab is a unit that consists of HPC experts working together with technicians, and these HPC experts have knowledge of one specific discipline. They work in one particular field, so they are able to provide support for their community while also doing research in that field. We started with plasma physics at the very beginning; we also have fluid and solid engineering, biology, and molecular systems – a whole set of SimLabs that came over time and were evaluated by the community.

Our research areas are around the basic natural sciences: chemistry, biology, engineering – and over the last couple of years it turned out that users from earth systems science and also from neuroscience joined our user community to make use of our systems.


From the past we also have very strong links to the quantum chromodynamics (QCD) community, and in the beginning the Blue Gene was seen primarily as a QCD machine – but it turned out, with our first test, that it should not be dedicated only to this community, because many other communities were clearly able to take advantage of the machine.

The other distinction from the other two Gauss centres is that we have no formal connection to a university; we are focused almost completely on supercomputing, rather than supporting a university IT department – though, of course, we do have some university links.


What drove the decision to use IBM Blue Gene for the main HPC system?

We initially took up the Blue Gene technology in 2005. At that point in time, most of the large supercomputer centres employed IBM Power 4 systems, and so did we – it was a very successful series of systems – but it became obvious to us that this would not be a suitable architecture if we wanted to move towards successful, highly scalable systems, because of concerns over price/performance and energy consumption versus performance. In that respect, we had come to a sort of dead end and we were looking for alternatives.


Then, in 2005, the first Blue Gene system was announced by IBM and we took the opportunity to buy the smallest system you could get. We tested it and it was surprisingly successful. In several steps we extended that system and also went through the different Blue Gene generation changes, up to the Blue Gene/Q, which we have now. By growing slowly, we were also able to take our universe with us; it was a continuous process of change.

Power 4 was a very powerful system but consumed a lot of energy, whereas Blue Gene was the opposite – the processors were individually a bit slower and had less memory, and more of them were installed on a single rack. Users had to adapt

