

FEATURE: DATA CENTRE OPTICS


The traffic we see exchanged over telecom networks is only the tip of the iceberg




These colourful pipes send and receive water for cooling Google’s data centre in Douglas County, Georgia. Also pictured is a G-Bike, the vehicle of choice for team members to get around Google’s data centres (Image: Google)


in order for the overall performance to keep scaling with such distributed architecture, there would be a need for significant increases in the available capacity of the communication links that enable these distributed data centres to work together.

Even though most of the traffic stays within the data centre, multi-terabit capacities are already required today to support data centre interconnect (DCI), which has led to the development of new purpose-built platforms. Considering the predicted increase in IP traffic, in the near future the capacity requirements for the traffic leaving the data centre could become so enormous that current technology might have trouble keeping up.

Just take into account reports that the web-scale data centre players are proposing to build data centres with more than 400,000 servers before the end of this decade. If we also assume that server interface speeds increase in that time-frame to, say, 25Gb/s, and that 15 to 20 per cent of this traffic is DCI traffic, then the total interconnection capacity across such distributed data centres would soon reach petabit levels.
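That petabit figure follows directly from the assumptions above. The short Python sketch below is purely illustrative: it simply multiplies the article's figures (400,000 servers, 25Gb/s interfaces, a 15 to 20 per cent DCI share) and introduces no new numbers.

```python
# Back-of-envelope check of the petabit claim above (illustrative only;
# the server count, interface speed and DCI share are the article's figures).

servers = 400_000          # servers in a web-scale data centre
interface_gbps = 25        # per-server interface speed, Gb/s
dci_share = (0.15, 0.20)   # fraction of traffic leaving the site as DCI traffic

total_gbps = servers * interface_gbps            # aggregate server bandwidth
for share in dci_share:
    dci_pbps = total_gbps * share / 1_000_000    # Gb/s -> Pb/s
    print(f"DCI share {share:.0%}: ~{dci_pbps:.1f} Pb/s of interconnect capacity")

# Output:
# DCI share 15%: ~1.5 Pb/s of interconnect capacity
# DCI share 20%: ~2.0 Pb/s of interconnect capacity
```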


Attention to architecture
Let's focus for now on the intra-data centre network. Typically, the servers in each rack within the data centre are connected using a top-of-rack (ToR) switch, while several ToR switches are connected through one or more aggregate switches, and several aggregate switches may be connected using core switches. Currently, such networks are based on high port-density Ethernet switches, typically forming a meshed network based either on the hierarchical ‘fat-tree’ architecture or on a flatter, two-tier spine-leaf architecture (Figure 1). Now imagine how this would look in reality for a data centre with hundreds of thousands of servers.

The majority of these data centre networks are based on Ethernet, which has seen widespread adoption thanks to its low price, high volumes and technology maturity. So far, electronic Ethernet switching has managed to keep up with the requirements in terms of high capacity utilisation and low latency. However, as data centres keep increasing in size, capacity and computing power, this growth will be accompanied by an inevitable increase in power consumption and bandwidth requirements.

This remarkable growth will change the landscape of electrical packet switching by pushing it to its limits in terms of port counts and aggregated bandwidth. Current technology and architecture will soon approach the point at which further scaling would be very costly and environmentally unfriendly, due to the bandwidth limitations of electronic circuit technologies, as well as their enormous power consumption. As a result, to ensure the long-term evolution of data centres, new approaches for data centre networks and switching are needed.
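To make the port-count pressure concrete, here is a rough sketch of the arithmetic behind a two-tier spine-leaf fabric. It assumes a generic non-blocking Clos design, with half of each leaf switch's ports facing servers and one uplink from every leaf to every spine; the function name and port counts are illustrative, not drawn from the article.

```python
# Rough sizing of a non-blocking two-tier spine-leaf fabric (generic Clos
# arithmetic; illustrative assumptions, not a description of any product).

def leaf_spine_capacity(leaf_ports: int, spine_ports: int) -> dict:
    """Size a 1:1 (non-blocking) two-tier spine-leaf fabric."""
    downlinks = leaf_ports // 2        # leaf ports facing servers
    uplinks = leaf_ports - downlinks   # leaf ports facing spines
    spines = uplinks                   # one uplink from each leaf to each spine
    leaves = spine_ports               # each spine port serves one leaf
    return {
        "spine_switches": spines,
        "leaf_switches": leaves,
        "max_servers": leaves * downlinks,
    }

# Even with 64-port switches throughout, a flat two-tier fabric tops out at
# 2,048 servers; reaching hundreds of thousands of servers means far higher
# port counts or extra switching tiers, with the power and cost that implies.
print(leaf_spine_capacity(leaf_ports=64, spine_ports=64))
# {'spine_switches': 32, 'leaf_switches': 64, 'max_servers': 2048}
```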


Over recent years, the prevailing industry approach to upgrading capacity in data centres has been to introduce optical interfaces – typically based on vertical-cavity surface-emitting laser (VCSEL) technology operating over multimode fibre – while the existing Ethernet switching layer remains the same. Therefore, a major industry and academic effort is under way to identify technologies for cost-effective, power-efficient and high-capacity optical transceivers for the data centre – forecast to be a multi-billion-euro industry of the future.

As data centres grow in capacity terms, they also occupy a much larger physical footprint, so the connections between switches can exceed the distance supported by typical 10 Gigabit Ethernet transceivers. Mega data centres with floor space approaching half a million square feet may require interconnect lengths between 500m and 2km. To scale interconnect speeds beyond 10Gb/s, while also supporting these long distances, transmitting


Table 1: How big are today’s data centres?
Facebook, Lulea, Sweden: 290,000 sq ft
Facebook, Altoona, Iowa: 476,000 sq ft
Google, Mayes County, Oklahoma: ~500,000 sq ft (estimated)
Apple, Maiden, North Carolina: 505,000 sq ft
Switch, SuperNap Las Vegas-7: 515,000 sq ft
Microsoft, Northlake, Illinois: 700,000 sq ft
For comparison, Wembley Stadium covers 172,000 sq ft.
Source: company websites







