Last week at the HPC supercomputing conference SC’16 in Salt Lake City, Utah, Mellanox announced its 40-port, 200Gb/s HDR InfiniBand line of Quantum-based switches and dual-port ConnectX®-6 host bus adapters. To link them all together, we also announced a line of LinkX® Direct Attach Copper (DAC) cables and Mellanox’s Silicon Photonics-based Active Optical Cables (AOCs), running at a whopping 200Gb/s for 8.0 Tb/s of total switch I/O!
There is an old saying in New England (I’m from Boston): “If you don’t like the weather, just wait a minute.” This saying is starting to apply to the high-speed interconnect space as well. Cloud computing, HPC, and now converged Cloud/HPC are driving link speed improvements faster than at any other time in history. And if you wait another “minute,” they are going to double again. Traditional enterprise is just now moving from 1G/10G to 10G/40Gb/s, while Cloud and HPC are rapidly moving to 100Gb/s and soon 200G and 400Gb/s. Now, that’s life in the fast lane!
LinkX® 200Gb/s InfiniBand HDR
The straight copper DAC and optical AOC cables support the new 200Gb/s InfiniBand HDR200 and 100Gb/s HDR100 speeds. For the first time in InfiniBand’s history, we also have a 1:2 DAC “splitter” or “breakout” cable that enables a single HDR200 port in a switch to break out into two HDR100 ports, linking to host bus adapter cards in servers, storage and other subsystems at 100Gb/s. The DACs and splitters offer reaches up to 3 meters, and the new AOCs up to 100 meters; both will be available in late 2017 to early 2018, along with the roll out of the new HDR switches and adapter cards.
The HDR200 DAC and AOC cables transfer 200Gb/s using 4 lanes of 50Gb/s each in a 4x50G configuration, in the industry-standard QSFP56 connector form-factor. This is the industry’s first 200Gb/s cabling solution using the new industry-standard QSFP56 MSA form-factor connector. It is noteworthy that this is not the proposed QSFP-DD or OSFP form-factors, which are still in development. It is likely that QSFP56-based systems will be shipping well before QSFP-DD or OSFP transceivers are even sampling. The 1:2 splitter DAC cable separates the 4x50G in the switch-port QSFP56 into two QSFP56 ends carrying 2x50G each.
The solution enables support for 40-ports of 200Gb/s HDR200 in a single Top-of-Rack switch using QSFP56-to-QSFP56 DAC and/or AOC cables. However, most servers today only need a maximum of 50Gb/s-100Gb/s I/O. Using the 1:2 DAC splitter cable, a whopping 80-ports of 100Gb/s HDR100 (or a mixture with HDR200 ports) can be supported delivering an unprecedented 8,000Gb/s (8Tb/s) of bidirectional I/O (16Tb/s total I/O) in a 1 RU Top-of-Rack switch.
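The port and bandwidth arithmetic above can be checked with a quick back-of-the-envelope calculation. This is just an illustrative sketch using the figures from the text:

```python
# Back-of-the-envelope I/O math for the HDR Top-of-Rack switch
# (figures taken from the text; purely illustrative).

LANE_RATE_GBPS = 50     # HDR signaling: 50Gb/s per lane
LANES_PER_PORT = 4      # a QSFP56 port carries 4 lanes

port_speed_gbps = LANE_RATE_GBPS * LANES_PER_PORT   # 200Gb/s HDR200
hdr200_ports = 40

# One-directional switch I/O across all ports:
switch_io_gbps = hdr200_ports * port_speed_gbps     # 8,000Gb/s = 8Tb/s

# Each HDR200 port can split 1:2 into two HDR100 (2x50G) ports:
hdr100_ports = hdr200_ports * 2                     # 80 ports at 100Gb/s

print(switch_io_gbps, hdr100_ports)  # 8000 80
```

Doubling the 8Tb/s for both directions gives the 16Tb/s total I/O quoted above.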
That is enough I/O for a single InfiniBand Top-of-Rack switch to support at least 40 servers in a single rack at 100Gb/s each, with 4Tb/s of non-blocking, non-oversubscribed uplink bandwidth in 40-ports of 100Gb/s or 20-ports of 200Gb/s. To put this into perspective, the Mellanox 1RU InfiniBand switch and HBA progression looks like this:
Mellanox’s current generation of SB7800 EDR 100Gb/s InfiniBand switches offer 36-ports or 3.6 Tb/s I/O. With the new HDR QM8700 InfiniBand switch, it jumps to 80-ports of 100Gb/s or 40-ports of 200Gb/s at 8.0 Tb/s I/O. That is more than twice the performance in the same QSFP package and 1RU switch!
Inside, EDR uses 4 lanes of 25Gb/s, whereas HDR100 uses only 2 lanes of 50Gb/s – both transporting 100Gb/s. EDR cables and transceivers can be used in the new HDR switches for backwards compatibility, but HDR100 is not compatible with EDR because the line rate and the number of lanes are different.
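The compatibility point is easy to miss: two link types can total the same 100Gb/s yet still be unable to interoperate. A minimal sketch of the lane math, using the configurations from the text:

```python
# Lane configurations for 100Gb/s links (from the text):
# EDR uses 4 lanes x 25Gb/s; HDR100 uses 2 lanes x 50Gb/s.
# Both total 100Gb/s, but a link only works if BOTH ends agree
# on lane count AND per-lane line rate, not just the total.

def total_rate(lanes, lane_rate_gbps):
    return lanes * lane_rate_gbps

edr    = (4, 25)   # (lanes, Gb/s per lane)
hdr100 = (2, 50)

assert total_rate(*edr) == total_rate(*hdr100) == 100  # same throughput

def compatible(a, b):
    # ends must match in lane count and line rate
    return a == b

print(compatible(edr, hdr100))  # False: same speed, incompatible links
```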
Direct Attach Copper (DAC), also known as Twinax, employs shielded coaxial copper wires to make a direct electrical connection between switches, or between switches and network adapter cards.
HDR/EDR DAC is the lowest-cost, lowest-latency, lowest-power method of creating 200Gb/s or 100Gb/s links today. That’s a pretty big claim, and here’s why: since the data path is only a copper wire, there are no semiconductor components in the data stream to consume power or induce delays. There is no way an active DAC using pre-emphasis semiconductors, or an optical AOC or transceiver with complex lasers, semiconductors and optical-to-electrical conversions, will ever beat the cost of a simple DAC copper wire and a solder ball! So, DAC is likely to hold this cost leadership position for a long time. The new 200Gb/s HDR200 DACs support reaches up to 3 meters, and the 100Gb/s EDR generation up to 5 meters.
For the first time in InfiniBand history, Mellanox introduced a DAC splitter cable, also known as a breakout cable. This splits a single 200Gb/s cable with 4 lanes of 50Gb/s into two cables, each with 2 lanes of 50Gb/s, thereby allowing a 200Gb/s HDR200 switch port to link down to 100Gb/s HDR100 ports in host bus adapter cards.
This is one more tool in the InfiniBand tool box for linking subsystems together. Today, most high-end servers only need, at most, a 50Gb/s-100Gb/s link, so the 1:2 splitter enables a single 200Gb/s HDR200 port in a switch to link to two subsystems, bringing the total number of 100Gb/s HDR100 ports to a whopping 80 in a single Top-of-Rack switch.
There are a couple of gotchas in the fine print, however:
Active Optical Cables (AOCs) convert the port’s electrical signals into optical pulses that are sent down an optical fiber and converted back to electrical signals at the other end. This extends the reach of a high-speed link from the 3-meter limitation of DAC to up to 100 meters – although the average reach deployed is typically under 40 meters, due to the difficulty of threading a transceiver end over a long distance in a congested data center!
Unlike optical transceivers, AOCs do not use optical connectors, making them an easy-to-use, plug & play cable solution. There is no blizzard of optical technology acronyms to learn, no complex interoperability issues to master, no matching of components, and no optical connectors to clean and maintain. The result is a plug & play cabling solution that can span reaches from 0.5 meters to 100 meters. DAC covers reaches up to 3 meters, and AOCs pick up from there, spanning 3 to 100 meters.
The AOC transceiver’s internal optics are based on Mellanox-designed Silicon Photonics technologies and transceiver control ICs. This vertical integration of key components enables Mellanox to be first to market with the most advanced products. Even the Silicon Photonics laser is a Mellanox design, not the typical DFB laser.
Mellanox also manufactures its own DAC cables and AOCs in several locations around the world. This allows us to maintain very high levels of quality and capacity, provide customers with high unit volumes when they need them, and keep costs at the lowest levels.
Mellanox LinkX DAC and AOC cables offer the lowest-cost copper and optical links with the lowest power and latency in the industry. This enables the most cost-efficient interconnect solution, with the highest ROI and lowest Capex and Opex. The straight and splitter DACs, along with the extended-reach AOCs, enable customers to build a wide variety of configurations that meet every application need: up to 40-ports of HDR200 between switches and subsystems, 80-ports of HDR100 to HBAs, or any mixture of both, at up to 100 meters between components. Everything is manufactured by Mellanox and uses Mellanox-designed Silicon Photonics and control ICs.
Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.