Maintenance & servicing
Maintenance best practice
Maintaining the health and condition of the mechanical cooling systems in a datacentre ensures that back-up plant works should breakdowns occur. Matthew Philo, product manager at FläktGroup, explains why equipment manufacturers are best placed to effectively maintain datacentres’ cooling assets, and advises on best operational practice to maximise the availability of critical IT services
After installation, responsibility for the upkeep of a datacentre cooling plant is often handed over to an IT manager, who will then enter into a maintenance agreement with a building facilities provider, often based solely on cost. However, the quality of the servicing and maintenance affects how well the equipment works, under both normal and unplanned conditions. What’s more, how the climate control systems are managed day to day has a significant bearing on how well back-up plans can be executed. It pays to work with a specialist manufacturer over the long term to keep indoor temperature and humidity within the recommended range, and to ensure they remain unaffected should other critical services fail.

Specialist expertise

Although engineers who have attended short training programmes, such as F-Gas courses, can work on air conditioning and refrigeration systems, close control units in mission-critical environments are best maintained by manufacturer-trained engineers. General air conditioning differs from the Computer Room Air Conditioning (CRAC) used in datacentres: the former is designed to deliver a comfortable interior environment in areas with high footfall, while the latter’s sole purpose is to control a datacentre’s indoor climate so that IT hardware can operate reliably and efficiently around the clock. If it goes wrong, critical infrastructure can go offline, resulting in major financial losses. This is why close control cooling is engineered to maintain temperature, humidity and air filtration within a much tighter range. It is therefore important for servicing and maintenance technicians to have a detailed knowledge of how precision control units work, and to understand their impact in a mission-critical setting. What’s more, products vary from one manufacturer to the next, which is why many develop detailed servicing and maintenance schedules tailored to the needs of their ranges. At FläktGroup, all engineers hold a minimum of NVQ Level 2. They undergo in-house training on a regular basis and follow a comprehensive schedule every time they carry out servicing. Where non-manufacturer-approved engineers are used, it is crucial to pay attention to the following elements, which tend to be overlooked.
Refrigerant quantity
Refrigerant charge within CRAC units is critical, and robust modern methods require readings of suction superheat, liquid sub-cooling and discharge superheat to be taken across the compressor’s operating envelope. Overcharging of these systems has become commonplace and is usually down to a lack of experience and knowledge. Failure to get this element right can result in compressor failure.
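To illustrate the kind of check involved (a minimal sketch, not FläktGroup’s procedure): suction superheat is the measured suction-line temperature minus the saturation temperature at the measured suction pressure, and liquid sub-cooling is the saturation temperature at the liquid-line pressure minus the measured liquid-line temperature. The saturation table, acceptance bands and function names below are invented for illustration; real figures depend on the refrigerant and the unit’s design envelope.

```python
# Hypothetical illustration of a refrigerant charge check on a CRAC circuit.
# The saturation table, acceptance bands and thresholds are invented for
# illustration; they are not manufacturer figures.

SATURATION_TABLE = {6.0: 5.0, 8.0: 12.0, 10.0: 18.0, 12.0: 23.0}  # bar(a) -> degC (placeholder)

def saturation_temp(pressure_bar: float) -> float:
    """Linearly interpolate saturation temperature from the placeholder table."""
    points = sorted(SATURATION_TABLE.items())
    for (p1, t1), (p2, t2) in zip(points, points[1:]):
        if p1 <= pressure_bar <= p2:
            return t1 + (t2 - t1) * (pressure_bar - p1) / (p2 - p1)
    raise ValueError("pressure outside tabulated range")

def charge_check(suction_p, suction_t, liquid_p, liquid_t):
    """Return suction superheat, liquid sub-cooling and a crude charge flag."""
    superheat = suction_t - saturation_temp(suction_p)    # K above saturation
    subcooling = saturation_temp(liquid_p) - liquid_t     # K below saturation
    if subcooling > 12 and superheat < 4:
        flag = "possible overcharge"
    elif subcooling < 2 and superheat > 12:
        flag = "possible undercharge"
    else:
        flag = "within illustrative bands"
    return superheat, subcooling, flag

# One reading; a technician would repeat this across the operating envelope.
print(charge_check(suction_p=7.0, suction_t=15.0, liquid_p=11.5, liquid_t=16.0))
```

A single reading is not enough: as noted above, these figures should be checked throughout the compressor’s operating envelope before the charge is judged correct.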
Condenser coils
Cleaning condenser coils is an essential task in any planned preventative maintenance regime. Depending on the type of dirt and contaminants that need to be removed, the coils can be cleaned by one of three methods: chemical cleaning, manual brushing or a pressure wash. Despite the simplicity of this procedure, it is often overlooked by technicians who don’t fully understand the principles of close control cooling units and the role they play in a datacentre. If left uncleaned, the coils will clog up, resulting in refrigerant high-pressure trips and, ultimately, system downtime.
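To picture why this matters: as a coil fouls, the condensing temperature, and with it the approach over ambient, creeps upwards until the high-pressure safety eventually trips. The sketch below simply flags a sustained rise in condensing approach as a prompt to inspect and clean the coil; the thresholds, sample data and function names are invented for illustration and are not manufacturer figures.

```python
# Hypothetical coil-fouling indicator: flag a sustained rise in the condensing
# "approach" (condensing temperature minus ambient temperature). All values
# below are invented for illustration.

from statistics import mean

def fouling_alert(samples, baseline_approach_k=10.0, margin_k=4.0, window=6):
    """samples: list of (condensing_temp_c, ambient_temp_c), oldest first.
    Returns True if the recent average approach exceeds baseline + margin."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    approach = mean(cond - amb for cond, amb in recent)
    return approach > baseline_approach_k + margin_k

# Example: ambient steady at 25 degC while condensing temperature drifts upward.
readings = [(35, 25), (36, 25), (37, 25), (39, 25), (40, 25), (41, 25), (42, 25)]
if fouling_alert(readings):
    print("Condensing approach is trending high - inspect and clean the condenser coil")
```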
Redundancy strategy

In addition to proper maintenance, best practice should also be applied during normal operation of the cooling facility. This ensures that back-up equipment kicks in when needed, and protects any management systems that control the cooling from intrusion.

Back-up units: one or two additional units are often specified to provide back-up should any unit fail. To ensure that the duplicate units remain able to function and help meet the required cooling load, the operator can set them up to work on a rotational basis and control the changeovers. By spreading the load, the risk of breakdowns in the duty units is also reduced.
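As a loose illustration of such a rotation (the unit names, weekly period and duty count are hypothetical, not a FläktGroup feature), a simple round-robin roster exercises every unit, including the redundant one, while spreading running hours evenly:

```python
# Hypothetical duty-rotation roster for an N+1 CRAC installation.
# Unit names, rotation period and duty count are invented for illustration.

from itertools import cycle

UNITS = ["CRAC-1", "CRAC-2", "CRAC-3", "CRAC-4"]  # e.g. three duty units plus one redundant
DUTY_COUNT = 3                                     # units required to meet the cooling load

def duty_roster(units, duty_count, periods):
    """Yield (period, duty_units, standby_units) with the standby role rotating."""
    standby_picker = cycle(units)
    for period in range(periods):
        standby = next(standby_picker)
        duty = [u for u in units if u != standby][:duty_count]
        yield period, duty, [standby]

for week, duty, standby in duty_roster(UNITS, DUTY_COUNT, periods=4):
    print(f"Week {week + 1}: duty = {duty}, standby = {standby}")
```

In practice the changeover would be driven by the unit controls or the building management system rather than by a script like this.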
Network security: to protect the cooling system’s network from cyber-attacks, all units, including the duplicate ones, should be linked securely to the datacentre’s IT infrastructure. FläktGroup’s solutions can be connected to a building’s network via a pCOWEB card, which restricts access to the units’ IP addresses and allows the datacentre operator to monitor the system’s security.

Power outage: if the power supply to the cooling equipment is interrupted, fans, compressors and humidifiers will stop working. Any cooling solution used in a datacentre should therefore be able to switch to a different feed in the event of a mains power outage. Modern units have a remote start/stop signal that can be connected to a back-up generator via the building management system. An Auto Transfer Switch can also be built into the units so that they have a dual power supply and can run on electricity from a UPS or generator if needed.
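To make the dual-supply idea concrete, the sketch below shows the kind of selection logic an Auto Transfer Switch embodies: prefer the mains feed and fall back to UPS or generator power when the mains is lost. The feed names and health flags are invented for illustration; the real changeover is performed by the switchgear, not by application code.

```python
# Hypothetical sketch of Auto Transfer Switch selection logic for a CRAC unit's
# dual power supply. Feed names and health flags are invented for illustration.

def select_feed(mains_healthy: bool, ups_healthy: bool, generator_healthy: bool) -> str:
    """Prefer mains; otherwise fall back to UPS, then generator; else raise an alarm."""
    if mains_healthy:
        return "mains"
    if ups_healthy:
        return "ups"
    if generator_healthy:
        return "generator"
    return "no supply - raise critical alarm"

# Example: mains lost, UPS available, generator still starting.
print(select_feed(mains_healthy=False, ups_healthy=True, generator_healthy=False))  # -> "ups"
```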
Non-IT infrastructure plays a crucial role in keeping a datacentre operational and represents a significant proportion of the hardware investment. Maintaining, servicing and running cooling facilities on the cheap may cost less in the short term, but it will jeopardise the availability of both regular and redundant equipment in the long run, which could lead to much higher losses down the line. By working closely with manufacturers, datacentre managers can be advised on best operational practice and gain greater reassurance that redundant units can take on the required load during unplanned outages.