Interconnection
Next-generation datacentre connectivity & thermal management
Joe Dambach, new product development manager at Molex, tells CIE about new connectivity and thermal management options available for datacentres
Chips, switches, connectors, cables and optical module technologies are at the core of datacentre networks, which must support increasingly fast processing, higher bandwidth and greater density. A range of pluggable I/O solutions is designed to support the fastest data rates across the spectrum of distances found in datacentres and telecommunications, with lower latency and insertion loss, and excellent signal integrity, electromagnetic interference protection and thermal management.

Bandwidth requirements for wireless devices have been a catalyst for higher-density designs in server farms, and high-speed, high-density pluggable I/O solutions provide a highly scalable upgrade path. Supporting next-generation 100 Gbps Ethernet and 100 Gbps InfiniBand EDR applications, the zQSFP+ interconnect transmits data at up to 25 Gbps per serial lane, making it a popular choice in datacentre HPC, switches, routers and storage.

As data usage rises, network switch technologies keep data flowing smoothly, and high-density connectors can relieve pressure on core switches. There are already silicon chips on the market that support 256 differential lanes. The missing link is a connector form factor dense enough to support this lane count in a 1RU box while managing thermal and signal-integrity constraints. The QSFP-DD MSA set out to address the technical challenges of achieving a double-density interface.
The QSFP-DD specification defines a module, a stacked integrated cage/connector system and a surface-mount cage/connector system. It expands the standard four-lane QSFP interface by adding a row of contacts to create an eight-lane electrical interface, with each lane operating at up to 25 Gbps using NRZ modulation or 50 Gbps using PAM4 modulation. This allows a single QSFP-DD port to deliver 200 Gbps or 400 Gbps of aggregate bandwidth. A single switch slot can support up to 36 QSFP-DD modules, providing up to 14.4 Tbps of aggregate capacity.
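The aggregate figures quoted above follow directly from the lane counts and per-lane rates. The short sketch below is purely illustrative arithmetic; the function names are the author's own and do not belong to any Molex or QSFP-DD MSA tool.

```python
# Illustrative arithmetic behind the QSFP-DD bandwidth figures quoted above.
# Lane counts, per-lane rates and the 36-module slot count come from the
# article; the helper functions themselves are hypothetical.

def port_bandwidth_gbps(lanes: int, lane_rate_gbps: float) -> float:
    """Aggregate bandwidth of one pluggable port, in Gbps."""
    return lanes * lane_rate_gbps

def slot_capacity_tbps(ports: int, port_gbps: float) -> float:
    """Aggregate capacity of one fully populated switch slot, in Tbps."""
    return ports * port_gbps / 1000

# QSFP-DD: eight lanes at 25 Gbps NRZ or 50 Gbps PAM4
print(port_bandwidth_gbps(8, 25))    # 200 Gbps per port
print(port_bandwidth_gbps(8, 50))    # 400 Gbps per port

# 36 QSFP-DD ports in a single switch slot
print(slot_capacity_tbps(36, 400))   # 14.4 Tbps aggregate
```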
As transceiver speeds increase, the energy needed to drive signals increases and generates more heat. Thermal cooling is critical for connector and module performance and for overall network power consumption, energy efficiency and longevity. Advances in heat-sink technologies enable highly efficient, reliable and resilient thermal-management strategies that support both higher-density copper and optical connectivity.
The QSFP-DD module drives eight lanes in a space only slightly deeper than that of the QSFP. That means eight lasers generating heat within the module, effectively doubling the thermal energy. Managing thermal performance at these higher heat densities will be critical moving forward. Advanced thermal-management techniques used in the design of the module and cage enable the QSFP-DD to support power levels of at least 7W, with a target range of up to 10W. Additional work is under way to develop solutions for cooling QSFP-DD modules operating at 12W or higher. MSA partners are already tooling QSFP-DD products, including modules and cages available for thermal testing in customer environments. The new optical transceiver stands to bring exceptional value by enabling manufacturers to produce more competitive network server, switch and storage solutions to support rising data traffic volumes.
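To give a sense of scale, the module power levels above can be turned into a rough per-slot heat load. This is a minimal sketch under the simplifying assumption that module power is fully dissipated as heat; the 36-module count and power figures come from the article, while the calculation itself is the author's illustration rather than an MSA specification.

```python
# Rough per-slot heat load implied by the module power levels discussed above.
# Assumption: each module's electrical power is dissipated entirely as heat.

MODULES_PER_SLOT = 36  # up to 36 QSFP-DD modules per switch slot (from the article)

def slot_heat_load_w(module_power_w: float, modules: int = MODULES_PER_SLOT) -> float:
    """Total heat (W) a fully populated slot must dissipate."""
    return module_power_w * modules

for power_w in (7, 10, 12):
    print(f"{power_w} W modules -> {slot_heat_load_w(power_w):.0f} W per slot")
# 7 W modules  -> 252 W per slot
# 10 W modules -> 360 W per slot
# 12 W modules -> 432 W per slot
```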
www.molex.com