Know how: Ready for anything?


Nick Taylor of Panduit discusses 10 Gigabit readiness for the physical infrastructure of the data centre, in the form of I/O consolidation at rack level.


Traditional concerns of data centre stakeholders are keeping their applications running and their networks scalable, available, and secure. Over the last five years, however, demand for processing power has surged and server/storage equipment densities have increased dramatically, making it increasingly difficult to specify cost efficient power and cooling solutions that keep up with new applications. Technology evolution in the data centre is challenging IT managers to budget effectively and develop reliable, high performance, secure, and scalable network infrastructures. Advances in networking and processing require innovative physical layer solutions to support these systems. 10Gb/s transport rates over Ethernet are a reality, with higher rates in development that are likely to be available by 2010. Server consolidation through virtualisation, the growing density of network storage systems, and the larger, multiple workloads enabled by multicore architectures all drive demand for higher bandwidth network connections.

A key trend in data centres is consolidation. IT organisations are consolidating multiple applications onto a smaller number of more powerful servers using virtualisation technology. The benefits range from a reduced need for data centre space to lower power and cooling expenses. Forward looking enterprises are preparing for the next wave: input/output (I/O) consolidation, which leverages a unified 10Gb/s fabric at rack level to run LAN, SAN, and interprocess communication (IPC) protocols over the same cabling infrastructure. Many data centres operate multiple parallel networks: one for IP networking, one for block I/O, and a third for server-to-server IPC protocols used by high performance computing (HPC) applications. Running multiple networks has a tremendous impact on capital expenditure. Several attempts at merging I/O into one consolidated physical infrastructure have been adopted for some data centre applications, such as InfiniBand for cluster and grid computing, but these technologies can be more costly than 10 Gigabit Ethernet.

An enabling consolidation option is 10 Gigabit Ethernet, which brings high performance at a competitive cost to the data centre. Extremely fast transmission rates are typically first available over fibre connectivity, and become available over copper as the market for silicon-derived components grows. Today, 10Gb/s transports are available over both copper and fibre cabling, and both the Ethernet and FCoE standards are ready for the transition to 10Gb/s transmission speeds. With 40Gb/s and 100Gb/s data rates on the horizon for even faster next generation networking, it is clear that Ethernet is an effective, high performance I/O consolidation platform now and well into the future.
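As a rough illustration of why rack-level I/O consolidation matters financially, the sketch below compares adaptor and cable counts for a hypothetical 24-server rack running the three parallel networks described above against the same rack on a unified 10Gb/s fabric. All of the per-server figures are assumptions chosen for the example, not numbers from this article.

```python
# Illustrative only: assumed per-server adaptor counts for a hypothetical
# 24-server rack, before and after I/O consolidation onto a 10Gb/s fabric.
servers_per_rack = 24

# Parallel-network design: separate adaptors (and cables) per protocol family.
gbe_nics_per_server = 2        # LAN (IP networking)
fc_hbas_per_server = 2         # SAN (block I/O)
ipc_ports_per_server = 1       # HPC / inter-process communication

legacy_ports = servers_per_rack * (
    gbe_nics_per_server + fc_hbas_per_server + ipc_ports_per_server
)

# Consolidated design: LAN, SAN and IPC share a unified 10Gb/s Ethernet fabric,
# typically over a redundant pair of consolidated adaptors per server.
consolidated_ports = servers_per_rack * 2

print(f"Ports/cables per rack, parallel networks: {legacy_ports}")
print(f"Ports/cables per rack, unified 10Gb/s fabric: {consolidated_ports}")
print(f"Reduction: {100 * (1 - consolidated_ports / legacy_ports):.0f}%")
```

With these assumed figures the rack-level port count drops by 60%, before counting the extra switches, patching and power that each additional parallel network brings with it.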




The goal of I/O consolidation is to create a data centre environment that provides anytime, anywhere access to content over a single cabling infrastructure. Enhancements to 10 Gigabit Ethernet represent a significant opportunity to improve data centre efficiencies.

A set of Ethernet enhancements termed Data Centre Ethernet (DCE, also known as Converged Enhanced Ethernet, or CEE) uses the bandwidth of 10 Gigabit Ethernet technologies to make Ethernet a unified fabric for I/O consolidation in the data centre. Backed by major industry heavyweights, the features of DCE make the Ethernet approach compelling:
• 10Gb/s and higher data rates.
• Lossless, easy-to-manage transport via flow control and congestion management (a frame-level sketch follows this list).
• Layer 2 multipathing for unicast and multicast traffic.
• DCBCXP (Data Centre Bridging Capabilities Exchange Protocol).
• Support for larger Layer 2 network domains.
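The lossless transport bullet rests on priority based flow control, standardised as IEEE 802.1Qbb, which extends the familiar Ethernet PAUSE frame with a per-priority enable vector and eight independent pause timers, so that one traffic class (for example the storage class) can be paused without stalling the others. The minimal sketch below, written for illustration only, builds such a frame with Python's struct module; the source MAC, the paused priority and the timer value are arbitrary choices for the example.

```python
import struct

def pfc_frame(src_mac: bytes, pause_quanta: list[int]) -> bytes:
    """Build a minimal Priority Flow Control (per-priority PAUSE) frame.

    pause_quanta: eight pause times, one per priority (0-7), each in units
    of 512 bit times; a value of 0 leaves that priority un-paused.
    """
    assert len(pause_quanta) == 8
    dst_mac = bytes.fromhex("0180c2000001")   # reserved MAC control address
    ethertype = struct.pack("!H", 0x8808)     # MAC Control EtherType
    opcode = struct.pack("!H", 0x0101)        # PFC opcode (classic PAUSE is 0x0001)
    # Bit i of the enable vector marks priority i's timer as valid.
    enable_vector = struct.pack(
        "!H", sum(1 << i for i, q in enumerate(pause_quanta) if q > 0)
    )
    timers = b"".join(struct.pack("!H", q) for q in pause_quanta)
    payload = opcode + enable_vector + timers
    # Pad to the 46-byte minimum Ethernet payload; the FCS is left to the MAC.
    payload += b"\x00" * (46 - len(payload))
    return dst_mac + src_mac + ethertype + payload

# Pause priority 3 (e.g. the FCoE class) for 0xFFFF quanta, leave the rest alone.
frame = pfc_frame(bytes.fromhex("020000000001"), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
print(len(frame), frame.hex())
```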


One primary data centre application made possible by DCE is FCoE (Fibre Channel over Ethernet), a standards based encapsulation of Fibre Channel onto an Ethernet fabric. The FCoE specification transports native Fibre Channel frames over an Ethernet infrastructure, allowing existing Fibre Channel management models to stay intact when used over Ethernet networks. FCoE requires the underlying network fabric to be lossless (i.e., no packet drops). While this can be implemented using the existing Ethernet PAUSE mechanism, DCE features enhance the network's ability to carry multiple protocols, each with different requirements. Priority flow control supports class-of-service based flow control, with the ability to implement lossless transmission on a per flow basis. DCE congestion management techniques help to prevent congestion from reaching the network core, and bandwidth management supports traffic shaping that comes into play only when resources become scarce. Because FCoE is based on the Fibre Channel model, standard fabric services (including host-to-switch and switch-to-switch behaviour) and features remain unchanged.
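To make the encapsulation step concrete, the sketch below wraps an already-assembled Fibre Channel frame in an Ethernet frame carrying the FCoE EtherType (0x8906), bracketed by start-of-frame and end-of-frame markers. It is a simplified illustration of the idea rather than a byte-accurate rendering of the FC-BB-5 frame format; the MAC addresses and the default delimiter codes are placeholders.

```python
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                     sof: int = 0x2E, eof: int = 0x41) -> bytes:
    """Wrap a complete Fibre Channel frame (header + payload + FC CRC)
    in an Ethernet frame for transport over a lossless DCE fabric.

    The SOF/EOF delimiter codes default to placeholder values; real
    implementations take them from the FC frame-encapsulation tables.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # FCoE header: 4-bit version plus reserved bits, then the SOF code.
    fcoe_header = struct.pack("!13xB", sof)
    # FCoE trailer: EOF code plus reserved padding (the Ethernet FCS is added by the MAC).
    fcoe_trailer = struct.pack("!B3x", eof)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A dummy 36-byte 'FC frame' stands in for real SCSI traffic here.
frame = fcoe_encapsulate(bytes.fromhex("0efc00000001"),
                         bytes.fromhex("0efc00000002"),
                         b"\x00" * 36)
print(len(frame), frame.hex())
```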

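The bandwidth management behaviour mentioned above, where traffic shaping only comes into play when the link is oversubscribed, can be illustrated with a small model. The sketch below is a toy allocator, not the actual DCE scheduler: each class is guaranteed its configured share of a 10Gb/s link under contention, and any capacity a class leaves idle is redistributed to classes that still have demand. The class names, shares and offered loads are assumptions for the example.

```python
# A toy model of per-class bandwidth management on a converged 10Gb/s link.
LINK_GBPS = 10.0

def allocate(shares: dict[str, float], demand: dict[str, float]) -> dict[str, float]:
    """Give each class min(demand, guaranteed share), then hand spare
    capacity to still-hungry classes in proportion to their shares."""
    alloc = {c: min(demand[c], shares[c] * LINK_GBPS) for c in shares}
    spare = LINK_GBPS - sum(alloc.values())
    while spare > 1e-9:
        hungry = {c: shares[c] for c in shares if alloc[c] < demand[c]}
        if not hungry:
            break
        total = sum(hungry.values())
        spare_used = 0.0
        for c, s in hungry.items():
            extra = min(spare * s / total, demand[c] - alloc[c])
            alloc[c] += extra
            spare_used += extra
        spare -= spare_used
        if spare_used < 1e-9:
            break
    return alloc

shares = {"LAN": 0.5, "FCoE": 0.4, "IPC": 0.1}    # configured shares (assumed)
demand = {"LAN": 7.0, "FCoE": 5.0, "IPC": 0.2}    # offered load in Gb/s (assumed)
print(allocate(shares, demand))   # oversubscribed, so the shares come into play
```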

A new generation of network adaptors appears to the host operating system as a separate Ethernet Network Interface Card (NIC) and Fibre Channel Host Bus Adapter (HBA), transparently encapsulating Fibre Channel alongside IP traffic over the same Ethernet channels before passing both traffic flows to the network. These adaptors can use existing operating system drivers, making integration seamless with existing software and management models. The FCoE encapsulation is performed by one of two types of FCoE-capable adaptor. The first is based on Application-Specific Integrated Circuit (ASIC) technology: ASIC-based adaptors known as Converged Network Adapters (CNAs) deliver outstanding encapsulation performance, in addition to advanced virtualisation when used with Cisco Nexus 5000 Series switches. CNAs based on this technology use pluggable transceivers in either the SFP+ or the XFP form factor to support future standards.

Another approach to supporting FCoE is to use a standard 10 Gigabit Ethernet NIC combined with a driver capable of performing the FCoE encapsulation in software. The FCoE software stack is based on the OpenFC open source project, and the underlying Ethernet layer is handled by a 10 Gigabit Ethernet ASIC.

