Page 18
Management
www.us-tech.com
Innovation to Counter the Need for Internet Speed Limits
By Gregory Finn, Vice President of Business Development, Rockley Photonics
Until now, data centers have managed to keep pace with vast quantities of generated and stored data. However, they do so at enormous cost, which will continue to grow as bandwidth demands show no signs of leveling in the foreseeable future. In 2015, annual global IP traffic crossed the zettabyte threshold (that is, 10²¹ bytes). This quantity of data is simply beyond human comprehension. One zettabyte of data transmission would be akin to watching Netflix's entire HD video catalogue every day, for the next 3 million years.

Our generation of massive volumes of IP traffic is taking its toll. Data centers currently consume about 3 percent of the total global electricity supply — 416.2 terawatt hours last year. Current predictions suggest that we will need 1,000 times more bandwidth than we currently have just to keep up with the demand for Internet services over the next 10 years. We cannot deliver that bandwidth economically with the data center technology in place today.
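The 1,000-fold figure over a decade implies a steep compounding rate. A quick back-of-envelope check (the doubling inference is ours, not a figure from the article):

```python
# 1,000x more bandwidth in 10 years: what annual growth does that imply?
growth = 1000 ** (1 / 10)   # compound annual growth factor
print(round(growth, 3))     # ~1.995, i.e. demand roughly doubles every year

# Doubling annually for a decade slightly overshoots the target:
print(2 ** 10)              # 1024
```

In other words, the prediction amounts to bandwidth demand doubling every single year for ten years running.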
Network Convolution

The problem with mega-scale data center networks is that they are built inefficiently with equipment that was never intended for networks of this size. We can trace the complexity all the way down to the component level, where hardware vendors still operate in silos and build network equipment with discrete CMOS switch chips, transceivers and high-speed electronic chassis constructions.

Switching is still mainly performed with electronic CMOS switches, currently limited to between 24 and 36 ports each. A data center with 100,000 or more servers, all needing connections, requires a vast number of chips in multi-layer architectures to operate. All these components add to network cost, management complexity and power consumption, and do not scale along with the rest of the data center.

To a certain extent, this lack of scalability can be alleviated using high-radix chassis switches. However, at a component level, these switches are little more than low-radix switching chips connected internally in a Clos topology using expensive high-speed circuit boards and backplane technologies, and are ultimately governed by the same inefficient rules.
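To see why fixed-radix chips force multi-layer designs, consider the standard three-tier fat-tree, a common folded-Clos data center topology. The formulas below are textbook results for k-port switches, used here only for illustration; they are not Rockley's figures:

```python
# Three-tier k-ary fat-tree built entirely from identical k-port switch chips.
def fat_tree(k):
    hosts = k**3 // 4         # maximum servers the topology supports
    switches = 5 * k**2 // 4  # edge + aggregation + core chips required
    return hosts, switches

# A 36-port chip tops out well below 100,000 servers in three tiers:
print(fat_tree(36))   # (11664, 1620)

# Reaching 100,000 servers in three tiers needs a much higher radix
# (k must be even in this topology):
k = 2
while fat_tree(k)[0] < 100_000:
    k += 2
print(k, fat_tree(k))  # 74 (101306, 6845)
```

With 36-port chips, a three-tier network cannot even reach 100,000 servers; operators must add further layers or pods, multiplying the chip count, cabling and management burden the article describes.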
Scaling Efficiencies

Rockley Photonics, a specialist in the creation of scalable data center networks, uses an integrated combination of silicon photonics and traditional CMOS technology to lower the cost of data centers and to power efficient scaling of the network. This is recognized as the path forward in the face of a slowing Moore's Law that has served CMOS electronic advancements for generations.

The packet switching approach consolidates the majority of the functional elements (and value) of the network within a single module. The module combines the packet processing, switching, backplane and transceiver functionalities into a fiber-in/fiber-out unit that can be interconnected to operate like a single switch. The switch's size can range from standard up to unprecedented levels of scaling, capable of interconnecting all the racks in a mega-scale data center.

This approach offers several benefits. As optical interconnects are already widely deployed for high-speed transmission between processing elements, the smart integration of the technologies and integrated optical interfaces maximizes the transmission of data in the optical domain. This alone reduces both cost and power consumption, as the need for high-speed signals through copper in PCBs and backplanes is virtually eliminated in the network.

Port density is drastically increased, and the modular architecture allows operators to scale the network in tandem with compute resources, without added complexity. The CAPEX and OPEX benefits are inherent with greater port density, as this allows the deployment of much larger switches in the same space as competing solutions.

A more efficient, lower latency network enables workloads and bandwidth to be distributed more effectively across the servers, leading to more efficient compute functionality and better utilization of the compute resource at hand. As the server compute resource commands the vast majority of the overall DC power requirements, the realized savings can be substantial.

October, 2016

[Figure: Rockley improves data centers through a combination of photonics and CMOS technology.]

The Known Unknowns

Photonics and fiber optics have become the transport heroes in the central nervous system of the digital ecosystem, with the IT and telecoms industries investing billions in R&D each year. Rockley's solution benefits from Moore's Law scaling in CMOS technology and from the rapid advances in photonics. In time, these combined advances could reduce networking costs and power consumption by a factor of ten.

Network infrastructure on its
own generally represents between 12 and 15 percent of data center power consumption, not including HVAC. Using the total global energy consumption of data centers last year, this could represent a savings of 45 to 56 terawatt hours per annum, equivalent to the full capacity of nearly seven 1 gigawatt nuclear reactors.

In order to meet the demands of tomorrow in an environmentally responsible way, we must adopt a multi-faceted strategy. The software-defined data center (SDDC) is already driving greater efficiencies, while investment in renewable energy is taking an increasing number of data centers off the grid altogether.

Through new thinking and the application of new technologies throughout the data center network, it will be possible to recalibrate the cost trajectory of networking and, in turn, help decouple the inevitable growth of data centers from the resources needed to power them.

Contact: Rockley Photonics, 234 E Colorado Boulevard, Suite 600, Pasadena, CA 91101
Phone: 626-304-9960
Web: www.rockleyphotonics.com
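As a closing sanity check, the 45 to 56 terawatt-hour savings range quoted above follows directly from the article's own figures, assuming the "factor of ten" cut in network power means a 90 percent reduction (that interpretation is ours):

```python
# Back-of-envelope check of the savings figures quoted in the article.
total_dc = 416.2          # TWh/yr, global data center consumption (article figure)
net_share = (0.12, 0.15)  # network share of DC power (article figure)
reduction = 0.9           # assumed: a "factor of ten" cut leaves 10% of network power

savings = [round(total_dc * s * reduction, 1) for s in net_share]
print(savings)            # [44.9, 56.2] TWh/yr -> the article's 45-56 TWh range

# Annual output of one 1 GW reactor running flat out:
reactor_twh = 1e9 * 8760 / 1e12    # 8.76 TWh
print(max(savings) / reactor_twh)  # ~6.4 -> "nearly seven" reactors
```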