DCA REVIEW: Operational Professionalism

Operational professionalism in global operations

By Mike Bennett, VP global data centre acquisition and expansion for CenturyLink.

COLOCATION INFRASTRUCTURE that extends around the world enables the easy delivery of the security, space, power and bandwidth needed to keep big businesses running smoothly. For many, the location of data centres is hugely important for geographic diversity and redundancy, IT asset protection and equipment failover.

However, the question always comes up: “why does it matter that a provider has over 50 data centres around the world if I need just one rack?” It is a fair question, and one that has a lot to do with operational procedure and professionalism.

Firstly, and perhaps most importantly, having facilities around the world mitigates risk. How? Through knowledge sharing. Classifying incidents into two types – the ‘abnormal incident’ and the ‘avoidable error’ – means appropriate processes can be put in place to ensure ‘one offs’ remain exactly that, rather than being repeated time and time again by colleagues who have simply not had the chance to learn from others around the world. Humans will always make mistakes. The difference between one person learning from their mistake and an entire global operation learning is huge, and requires a high level of professionalism. When an ‘avoidable error’ is recorded and shared, not as a reprimand but as a method of knowledge sharing, processes and procedures can be put in place that benefit everyone. The lesson can come from the other side of the world or from around the corner – it doesn’t matter. Essentially, everyone learns together and develops at a much faster rate as a result. When it comes to ‘abnormal incidents’, patterns are spotted almost immediately.

A ‘one off’ recorded as an abnormality but shared with 55 other data centres across the world can quickly reveal patterns. What might otherwise be neglected, and evolve into a serious problem, can be rectified without causing a minute of disruption.

Having a global operation also affords the luxury of a dedicated team of people who look specifically into such issues, so a lack of time or resource is never a problem. The flip side is that innovation is also shared globally. If a team member comes up with a great idea that works, then two to three weeks later it can be implemented across the world. If innovation happens anywhere, everyone benefits.

One of the most important advantages of global knowledge sharing is, of course, value for money. Power costs have gone through the roof. In the past, small efficiency improvements were little more than best practice – you wouldn’t reap a huge benefit from them. Now, those little efficiencies have a real, tangible impact on the bottom line. So much of the data centre is driven by what the engineers do on the floor, and it is the sharing of small modifications and best practice that really makes a difference to the bottom line.

Data centre design is key to fostering such an environment. When you have a Ferrari, you want to make sure it’s looked after, and having fellow owners and enthusiasts to bounce concerns and ideas off benefits everyone. And let’s not forget that healthy internal competition can go hand in hand with professionalism. Engineers take real pride in their work and in sharing their expertise with teammates.


Pushing for operational excellence

By Duncan Clubb, Managing Director, Align

DATA CENTRES may resemble massive structures made of bricks, steel, concrete and other hard stuff, but they can sometimes seem remarkably fragile and easy to break. And when data centres fail, all sorts of hell can break loose. The head of a data centre that specialises in hosting 70,000 small businesses recently told of how, if his data centre suffered an outage, within an hour 70,000 tweets would hit the internet complaining about his company, and 70,000 businesses would be without their websites. You may also have seen the recent news about a good-sized chunk of Western Australia losing internet access because of a data centre outage at one of its biggest ISPs.

18 | February 2015

A lot of time and money is spent designing data centres that should not fail – they have resilience designed and built into every critical system – and yet they still fail. Sometimes this can be put down purely to random failures of equipment, or to other unpredictable failures and unfortunate coincidences, but most estimates are that at least 70-80% of data centre outages are down to human error. This means that no matter how well designed or built your data centre is, it is always going to be at risk of failure. That is because we cannot design data centres that run and maintain themselves automatically. Humans are always required at some level or other to
