FEATURE | DATA CENTRE MANAGEMENT
ENERGY EFFICIENT UPGRADE
Comtec provides a Schneider Electric hardware and software data centre solution that delivers major improvements in data centre management and energy efficiency at Cardiff University
Cardiff University's HPC data centre fulfils a number of disparate roles, from housing the servers that provide applications and storage for the university's general IT needs, through to hosting a high-performance computing cluster called Raven. The cluster is operated by the Advanced Research Computing at Cardiff (ARCCA) division and supports computationally intensive research projects across several Welsh universities. In addition, it houses the Cardiff Hub of the distributed High Performance Computing Wales service. The differing computing needs of the
Cardiff data centre impose challenges on its support infrastructure, including the power supplies and their backup UPS systems, the necessary cooling equipment and the racks containing IT and networking equipment.

To help keep the energy costs associated with running a state-of-the-art data centre to a minimum, Cardiff University has installed advanced StruxureWare Data Center Operation: Energy Efficiency management software from Schneider Electric, so that it can tightly monitor all the elements of its infrastructure and ensure maximum efficiency. The University has worked from the outset with Comtec, partner to Schneider Electric, to design and populate its data centre. To maximise the efficiency of the system, the strategy has been to coordinate rack density and layout with a
‘close-coupled’ chilled-water cooling solution and hot aisle containment (HACS). This involved using components from Schneider Electric’s InfraStruxure (ISX) data-centre physical infrastructure solution, in conjunction with high-efficiency chillers. A critical element in managing the
efficiency of the Cardiff data centre is Schneider Electric’s StruxureWare for Data Centres software suite, a data centre infrastructure management (DCIM) solution. In use, the DCIM software gives insight into the data centre’s power usage and cooling capacity utilisation, allowing management to respond to changes and to calculate metrics such as Power Usage Effectiveness (PUE) in real time.
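As a rough illustration of the metric involved, the short Python sketch below shows how PUE can be derived from metered power readings. The function name and the sample figures are hypothetical and are not taken from the Cardiff installation or from the StruxureWare software.

# Illustrative sketch only: PUE (Power Usage Effectiveness) is total facility
# power divided by the power drawn by the IT equipment alone. The readings
# below are hypothetical stand-ins for values a DCIM suite would collect
# from metered PDUs, UPSs and cooling plant.

def power_usage_effectiveness(total_facility_kw: float, it_load_kw: float) -> float:
    """Return PUE; a value of 1.0 would mean every watt goes to the IT load."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example readings (hypothetical): 340 kW at the facility meter,
# 200 kW delivered to the IT racks.
pue = power_usage_effectiveness(total_facility_kw=340.0, it_load_kw=200.0)
print(f"PUE = {pue:.2f}")  # PUE = 1.70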
HIGH PERFORMANCE REQUIREMENTS

Hugh Beedie, chief technology officer for both ARCCA and the general IT services department at Cardiff University, worked closely with the ARCCA team to ensure that the HPC data centre infrastructure for Cardiff University and Welsh researchers is truly fit for purpose. Beedie was part of the team
responsible for ensuring that the performance specification for the HPC infrastructure required it to be as energy efficient as possible, while also being functionally advanced from a computing viewpoint. This approach has paid off quickly: soon after its opening, the data
centre became part of the HPC Wales initiative. This meant ARCCA was required to take on additional computing equipment, and the utilisation of the two contained server racks in the data centre increased from 40 per cent to 80 per cent of capacity. The cooling systems were upgraded at the same time. Subsequent to the upgrade, the
operators noted, from their initial energy monitoring, that the PUE of the data centre was deteriorating. They could only surmise that a combination of factors was causing this, but there was not enough insight into how each element of the system was performing to pinpoint the reasons for the decline in efficiency. To meet these additional requirements, Beedie knew that the infrastructure would have to be improved. When making the case for a second upgrade to the cooling system, Beedie realised that improved power efficiency, as evidenced by a better data centre PUE, could also result in energy savings that would offset the additional investment cost over time. Assuming an annual PUE of 1.7,
which was very much a best-case scenario, Beedie calculated that reducing the PUE to 1.4 would see the cost of the cooling upgrade pay for itself easily over the working lifetime of any new equipment.
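To make the arithmetic behind that business case concrete, the sketch below estimates the annual saving from cutting PUE from 1.7 to 1.4. The IT load, electricity tariff and upgrade cost used here are hypothetical placeholders, not figures from Cardiff.

# Hypothetical worked example of the PUE-based payback argument.
# At a fixed IT load, facility energy = IT energy * PUE, so the saving
# from an efficiency improvement is IT energy * (PUE_old - PUE_new).

HOURS_PER_YEAR = 8760

def annual_saving_kwh(it_load_kw: float, pue_old: float, pue_new: float) -> float:
    """Energy saved per year when the facility PUE drops, for a constant IT load."""
    return it_load_kw * HOURS_PER_YEAR * (pue_old - pue_new)

# Assumed inputs (not from the article): 200 kW IT load, a 10 p/kWh tariff,
# and a notional 150,000 GBP cooling upgrade cost.
saving_kwh = annual_saving_kwh(it_load_kw=200.0, pue_old=1.7, pue_new=1.4)
saving_gbp = saving_kwh * 0.10
payback_years = 150_000 / saving_gbp

print(f"Annual saving: {saving_kwh:,.0f} kWh (about GBP {saving_gbp:,.0f})")
print(f"Simple payback: {payback_years:.1f} years")

With these placeholder numbers the saving is roughly 525,000 kWh a year, giving a simple payback of under three years; the actual figures would of course depend on the real IT load, tariff and upgrade cost.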