

PUE (power usage effectiveness) compares a facility's total energy consumption with the energy consumed by its IT equipment during a typical year. While PUE may not be a perfect tool, it is increasingly being supported by other measures such as WUE (water usage effectiveness) and CUE (carbon usage effectiveness), as well as by approaches that enhance the relevance of PUE, including SPUE (server PUE) and TUE (total PUE).
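
As a rough illustration of how these metrics relate, the sketch below computes PUE, WUE and CUE from annual facility measurements. The figures are invented for the example; real reporting would follow the operator's own metering and the relevant measurement standards.

```python
# Illustrative only: how PUE and its companion metrics are derived from
# annual facility measurements (the figures below are made-up examples).

total_facility_energy_kwh = 10_000_000   # everything the site consumes in a year
it_equipment_energy_kwh   = 6_500_000    # energy delivered to the IT equipment
water_used_litres         = 9_000_000    # annual water consumption
co2_emitted_kg            = 3_200_000    # annual carbon emissions

pue = total_facility_energy_kwh / it_equipment_energy_kwh   # dimensionless ratio
wue = water_used_litres / it_equipment_energy_kwh            # litres per kWh of IT energy
cue = co2_emitted_kg / it_equipment_energy_kwh               # kg CO2 per kWh of IT energy

print(f"PUE: {pue:.2f}  WUE: {wue:.2f} L/kWh  CUE: {cue:.2f} kgCO2/kWh")
```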


DATA CENTRE TRENDS


In the past decade, efficient hyperscale data centres have increased their relative share of total data centre energy consumption, while many of the less efficient, traditional data centres have been shut down. Consequently, total energy consumption has not yet increased dramatically. These newly built hyperscale data centres have been designed for efficiency. However, we know that there will be growing demand for information services and compute-intensive applications, driven by emerging trends such as AI, machine learning, automation and driverless vehicles. As a result, the energy demand from data centres is expected to increase, although the scale of that increase is the subject of debate. According to the best-case scenario, global data centre energy consumption will increase threefold by 2030 in comparison with current demand, but an eightfold increase is believed more likely. These projections include both IT and non-IT infrastructure. The majority of non-IT energy consumption is from cooling, or more precisely, rejecting the heat from the servers, and the cooling cost alone can easily represent 25 per cent or more of total annual energy costs. Cooling is of course a necessity for maintaining IT functionality, and it can be optimised by good design and by the effective operation of building systems. An important recent trend is an increase in server rack power density, with some racks as high as 30 to 40 kilowatts and above. According to AFCOM, the industry association for data centre professionals, the 2020 State of the Data Centre report found that average rack density jumped to 8.2 kW per rack, up from 7.3 kW in 2019 and 7.2 kW in 2018. About 68 per cent of respondents reported that rack density had increased over the previous three years. The shift towards cloud computing is certainly boosting the development of hyperscale and co-location data centres. Historically, a one-megawatt data centre would have been designed to meet the needs of a bank, an airline or a university, but many of these institutions and companies are now shifting to cloud services within hyperscaler and co-location facilities. As a result of this growing demand, there is an increased requirement for data speed, and because all of these data centres serve mission-critical applications, the reliability of the infrastructure is very important.

There is also an increased focus on edge data centres to reduce latency (delay), as well as on the adoption of liquid cooling to accommodate high-performance chips and to reduce energy use.




TEMPERATURE AND HUMIDITY CONTROL

One of the primary considerations for energy efficiency in air-cooled data centres is hot aisle/cold aisle containment. Sadly, containment is still poorly managed in many legacy data centres, leading to low energy efficiency. New data centre builds, on the other hand, tend to take containment very seriously, which contributes significantly to their performance. For many operators, supply air temperatures are optimally between 24 and 25.5°C. However, delta-T, the temperature differential between the hot aisle and the cold aisle, is also very important. Typically, delta-T is around 10 to 12°C, but 14°C is a common objective in data centre design. Increasing delta-T delivers a dual benefit: it reduces the fan motor energy required by the cooling systems and increases the potential for economised heat rejection strategies.
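
To see why a wider delta-T saves fan energy, the sketch below estimates the airflow needed to remove a given IT load at different delta-T values and applies the fan affinity laws, under which fan power scales roughly with the cube of flow. The 100 kW load and the air properties are assumptions for illustration only.

```python
# Rough illustration of why a larger delta-T reduces airflow and fan energy.
# Assumes standard air properties; figures are indicative, not design values.

AIR_DENSITY = 1.2         # kg/m^3, approximate at typical room conditions
AIR_SPECIFIC_HEAT = 1006  # J/(kg*K)

def required_airflow(it_load_kw: float, delta_t: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove it_load_kw at a given delta-T (K)."""
    return (it_load_kw * 1000) / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t)

def relative_fan_power(flow: float, reference_flow: float) -> float:
    """Fan affinity laws: power scales roughly with the cube of flow."""
    return (flow / reference_flow) ** 3

if __name__ == "__main__":
    load = 100  # kW of IT load in a containment zone (hypothetical)
    base_flow = required_airflow(load, 10)
    for dt in (10, 12, 14):
        flow = required_airflow(load, dt)
        print(f"delta-T {dt} K: {flow:.1f} m^3/s, "
              f"~{relative_fan_power(flow, base_flow) * 100:.0f}% of fan power at 10 K")
```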


Economisation is the process by which outdoor air can be utilised to facilitate a portion of data centre heat rejection. Economisation can occur directly, where outdoor air is actually brought into the cooling systems and delivered to the servers (after appropriate air filtration), or indirectly, where recirculating data centre air is rejected to ambient by way of an air-to-air heat exchanger. This lowers costs and improves efficiency and sustainability. However, to maintain efficiency, air-side pressure drops due to filtration should be minimised. So, if air is recirculated within the data centre, without the introduction of outside air, it should be possible to reduce or eliminate the need for filtration altogether.
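
A much-simplified sketch of how an economiser decision might look is shown below; the temperature and dew point thresholds are hypothetical, and a real control system would also weigh humidity limits, air quality and equipment ratings.

```python
# Simplified sketch of an economiser mode decision, with hypothetical thresholds.

def economiser_mode(outdoor_temp_c: float, outdoor_dew_point_c: float,
                    supply_setpoint_c: float = 24.0,
                    max_dew_point_c: float = 15.0) -> str:
    """Pick a heat-rejection mode from outdoor conditions (all values in deg C)."""
    if outdoor_temp_c <= supply_setpoint_c - 2 and outdoor_dew_point_c <= max_dew_point_c:
        # Outdoor air is cool and dry enough to supply directly (after filtration).
        return "direct free cooling"
    if outdoor_temp_c <= supply_setpoint_c + 5:
        # Too humid or marginally warm: reject heat via an air-to-air heat exchanger.
        return "indirect free cooling"
    return "mechanical cooling"

print(economiser_mode(12.0, 8.0))   # -> direct free cooling
print(economiser_mode(20.0, 18.0))  # -> indirect free cooling
print(economiser_mode(32.0, 20.0))  # -> mechanical cooling
```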


Cooling and ventilation require careful control, and it is important to deploy high efficiency fans, maintain a slight positive building pressure, and control room humidity. For example, make-up air systems should keep the space dew point sufficiently low that cooling coils only undertake sensible cooling, without having to tackle a latent load (removal of moisture from the air). The overall objective of the heat rejection system is to maintain optimal conditions for the IT equipment whilst minimising energy usage. Low humidity, for example, can increase the risk of static electricity, and high humidity can cause condensation, which is a threat to electrical and metallic equipment, increasing the risk of failure and reducing working lifetime. High humidity levels combined with various ambient pollutants have been shown to accelerate corrosion of components within servers. Cooling is essential to remove the heat generated by IT equipment, to avoid overheating and prevent failures. According to some studies, a rapidly fluctuating temperature can actually be more harmful to IT devices than a stable higher temperature, so the control loop is important from that perspective.
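
As an illustration of the dew point check described above, the sketch below estimates dew point from temperature and relative humidity using the Magnus approximation (coefficients are commonly cited values, not taken from this article) and compares it with a hypothetical coil surface temperature.

```python
import math

# Sketch: estimate dew point from dry-bulb temperature and relative humidity
# using the Magnus approximation (commonly cited coefficients).

MAGNUS_A = 17.62
MAGNUS_B = 243.12  # deg C

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) for a given temperature and RH (%)."""
    gamma = math.log(rel_humidity_pct / 100.0) + MAGNUS_A * temp_c / (MAGNUS_B + temp_c)
    return MAGNUS_B * gamma / (MAGNUS_A - gamma)

# If the space dew point stays below the cooling coil surface temperature,
# the coil does only sensible cooling and carries no latent (dehumidification) load.
coil_surface_c = 16.0  # hypothetical coil temperature
dp = dew_point_c(24.0, 50.0)
print(f"Dew point: {dp:.1f} C -> latent load on coil: {dp > coil_surface_c}")
```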


The latest IT equipment is typically able to operate at higher temperatures, which means that the intake temperature can be raised and the potential for free cooling and economisation is improved. Outdoor air can be utilised to cool indoor air either directly or indirectly (as noted earlier), and evaporative or adiabatic cooling can further improve the efficiency of economisation. These energy saving technologies have been extensively deployed, with the trend being towards dry heat rejection strategies that consume no water. As the temperature of the heat extraction medium (air or liquid) rises, the potential for waste heat from data centres to be effectively used increases, allowing, for example, its use in district heating networks. In Helsinki, for example, Microsoft and the energy group Fortum are collaborating on a project to capture excess heat. The data centre will use 100 per cent emission-free electricity, and Fortum will transfer the clean heat from the server cooling process to homes, services and business premises that are connected to its district heating system. This data centre waste heat recycling facility is likely to be the largest of its kind in the world.


THE IMPORTANCE OF ACCURATE MONITORING


In many modern facilities, 99.999 per cent uptime is expected, representing annual downtime of just a few minutes. These extremely high levels of performance are necessary because of the importance and value of the data and processes being handled by the IT infrastructure. A key feature in data centre design is the delivery of the correct temperature to servers, and this can only be achieved if the control system is able to rely on accurate sensors. Larger data halls can be more challenging to monitor because they have a greater potential for spatial temperature


Continued on page 30...

