Mellanox Launches 200Gb/s HDR InfiniBand AOC and DAC LinkX® Cables

Cables, Ethernet, Events, InfiniBand, Link-X

Just when you started to get familiar with 100Gb/s EDR, along comes 200Gb/s HDR, doubling the bandwidth.

Quick Summary

Last week at the HPC supercomputer conference SC’16 in Salt Lake City, Utah, Mellanox announced its 40-port, 200Gb/s HDR InfiniBand line of Quantum-based switches and dual-port ConnectX®-6 host bus adapters. To link them all together, we also announced a line of LinkX® Direct Attach Copper (DAC) cables and Mellanox’s Silicon Photonics-based Active Optical Cables (AOCs), running at a whopping 200Gb/s and delivering 8.0 Tb/s of total switch I/O!

There is an old saying in New England (I’m from Boston): “If you don’t like the weather, just wait a minute.” This saying is starting to apply to the high-speed interconnect space as well. Cloud computing, HPC, and now converged Cloud/HPC are driving link-speed improvements faster than at any other time in history. And if you wait another “minute,” they are going to double again. Traditional enterprise is just now moving from 1G/10G to 10G/40Gb/s, while Cloud and HPC are rapidly moving to 100Gb/s, with 200Gb/s and 400Gb/s soon to follow. Now, that’s life in the fast lane!

LinkX® 200Gb/s InfiniBand HDR

Photo: LinkX DAC, splitter, and AOC cables

LinkX® is Mellanox’s brand name for its cables and transceiver product line.

The single-ended copper DAC and optical AOC cables support the new 200Gb/s InfiniBand HDR200 and 100Gb/s HDR100 speeds. For the first time in InfiniBand’s history, there is also a double-ended, 1:2 DAC “splitter” or “breakout” cable that lets a single HDR200 switch port break out into two HDR100 ports, linking to host bus adapter cards in servers, storage, and other subsystems at 100Gb/s. The DACs and splitters offer reaches up to 3 meters, and the new AOCs up to 100 meters. Both will be available in late 2017 to early 2018, alongside the roll-out of the new HDR switches and adapter cards.

The HDR200 DAC and AOC cables transfer 200Gb/s using four lanes of 50Gb/s each, in a 4x50G configuration, over the industry-standard QSFP56 connector form-factor. This is the industry’s first 200Gb/s cabling solution using the new QSFP56 MSA form-factor connector. Notably, this is not the proposed QSFP-DD or OSFP form-factors, which are still in development; QSFP56-based systems will likely be shipping well before QSFP-DD or OSFP transceivers are even sampling. The 1:2 splitter DAC cable separates the 4x50G in the switch-port QSFP56 into two QSFP56 ends, each carrying 2x50G.

So, What?

The solution supports 40 ports of 200Gb/s HDR200 in a single Top-of-Rack switch using QSFP56-to-QSFP56 DAC and/or AOC cables. However, most servers today need at most 50Gb/s-100Gb/s of I/O. Using the 1:2 DAC splitter cable, a whopping 80 ports of 100Gb/s HDR100 (or a mixture with HDR200 ports) can be supported, delivering an unprecedented 8,000Gb/s (8Tb/s) of I/O in each direction (16Tb/s total) in a 1RU Top-of-Rack switch.

That is enough I/O for a single InfiniBand Top-of-Rack switch to support at least 40 servers in a single rack at 100Gb/s each, while still leaving 4Tb/s of non-blocking, non-oversubscribed uplink bandwidth: 40 ports of 100Gb/s or 20 ports of 200Gb/s. To put this into perspective, the Mellanox 1RU InfiniBand switch and HBA progression looks like this:
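As a sanity check, the port-budget arithmetic above can be worked through in a few lines. The figures come straight from this post; the script itself is purely illustrative:

```python
# Back-of-the-envelope check of the HDR Top-of-Rack switch port budget.
# All figures are taken from this post; illustrative sketch only.

SWITCH_PORTS_HDR200 = 40      # QM8700-class switch: 40 x 200Gb/s QSFP56 ports
HDR200_GBPS = 200
HDR100_GBPS = 100

total_io = SWITCH_PORTS_HDR200 * HDR200_GBPS      # 8,000 Gb/s in each direction
servers = 40                                      # one rack of servers
downlink = servers * HDR100_GBPS                  # 4,000 Gb/s down to servers
uplink = total_io - downlink                      # what remains for uplinks

print(f"Total switch I/O: {total_io} Gb/s ({total_io / 1000} Tb/s per direction)")
print(f"Downlink to {servers} servers at 100Gb/s: {downlink} Gb/s")
print(f"Non-blocking uplink budget: {uplink} Gb/s = "
      f"{uplink // HDR100_GBPS} x HDR100 or {uplink // HDR200_GBPS} x HDR200 ports")
```

Running this reproduces the numbers in the paragraph above: 8Tb/s of switch I/O, 4Tb/s consumed by 40 servers, and 4Tb/s left over for 40 HDR100 or 20 HDR200 uplinks.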



Mellanox’s current generation of SB7800 EDR 100Gb/s InfiniBand switches offers 36 ports, or 3.6 Tb/s of I/O. With the new HDR QM8700 InfiniBand switch, that jumps to 80 ports of 100Gb/s or 40 ports of 200Gb/s, for 8.0 Tb/s of I/O. That is more than twice the performance in the same QSFP package and 1RU switch!

Inside, EDR uses four lanes of 25Gb/s, whereas HDR100 uses only two lanes of 50Gb/s; both transport 100Gb/s in total. EDR cables and transceivers can be used in the new HDR switches for backwards compatibility, but HDR100 is not compatible with EDR because the line rate and number of lanes are different.
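The distinction can be made concrete with a small sketch: EDR and HDR100 both total 100Gb/s, but the lane count and per-lane rate differ, so the two ends cannot form a link. The table and compatibility check below are illustrative only, not a real Mellanox API:

```python
# Illustrative model of why EDR and HDR100 both reach 100Gb/s
# yet do not interoperate. Hypothetical names, not a real Mellanox API.

PORT_TYPES = {
    "EDR":    {"lanes": 4, "gbps_per_lane": 25},   # 4 x 25G = 100Gb/s
    "HDR100": {"lanes": 2, "gbps_per_lane": 50},   # 2 x 50G = 100Gb/s
    "HDR200": {"lanes": 4, "gbps_per_lane": 50},   # 4 x 50G = 200Gb/s
}

def total_gbps(port: str) -> int:
    p = PORT_TYPES[port]
    return p["lanes"] * p["gbps_per_lane"]

def compatible(a: str, b: str) -> bool:
    # Both ends must match lane count AND per-lane signaling rate,
    # not just the aggregate bandwidth.
    return PORT_TYPES[a] == PORT_TYPES[b]

assert total_gbps("EDR") == total_gbps("HDR100") == 100   # same aggregate speed...
assert not compatible("EDR", "HDR100")                    # ...but no link
```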

DAC Cables

Direct Attach Copper (DAC) cabling, also known as Twinax, employs shielded twin-axial copper wires to make a direct electrical connection between switches, or between switches and network adapter cards.


HDR/EDR DAC is the lowest-cost, lowest-latency, lowest-power method of creating 200Gb/s or 100Gb/s links today. That’s a pretty big claim, and here’s why: since there is only a copper wire in the data path, there are no semiconductor components in the data stream to consume power or induce delays. No active DAC with pre-emphasis semiconductors, and no AOC or transceiver with its complex lasers, semiconductors, and optical-to-electrical conversions, will ever beat the cost of a simple copper wire and a solder ball! So DAC is likely to hold this cost leadership position for a long time. The new 200Gb/s HDR200 DACs support reaches up to 3 meters; the 100Gb/s EDR generation reaches up to 5 meters.

New 1:2 DAC Splitters

For the first time in InfiniBand history, Mellanox has introduced a DAC splitter cable, also known as a breakout cable. It splits a single 200Gb/s cable with four lanes of 50Gb/s into two cables, each with two lanes of 50Gb/s, allowing a 200Gb/s HDR200 switch port to link down to 100Gb/s HDR100 ports on host bus adapter cards.



This is one more tool in the InfiniBand toolbox for linking subsystems together. Today, most high-end servers need at most a 50Gb/s-100Gb/s link, so the 1:2 splitter enables a single 200Gb/s HDR200 switch port to link to two subsystems, bringing the total to 80 HDR100 ports of 100Gb/s in a single Top-of-Rack switch.

A couple of gotchas in the fine print, however:

  • The HDR100 end only works with a ConnectX-6 HBA or another HDR switch, since it uses two lanes of 50Gb/s. The ConnectX®-5 HBAs and EDR switches use different signaling: four lanes of 25Gb/s to achieve 100Gb/s.
  • The 1:2 splitter cable only works one way, and only from the HDR200 switch port.
    1. The splitter cable can only split an HDR200 switch port into two HDR100 ports connecting to HBAs. It cannot be used in reverse, with a 200Gb/s HBA port split into two 100Gb/s links up to the switch.
    2. Nor can it split a ConnectX®-6 HBA 200Gb/s port into two more HBA ports. It only splits from a switch port to HBAs. This is a consequence of the MAC capabilities inside the chips: the switch chip is much bigger and has many more capabilities than the considerably smaller HBA chip.
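The restrictions above can be summarized as a simple validity check. The helper below is hypothetical, written purely to encode the rules stated in this post; it is not a real configuration tool:

```python
# Encodes the 1:2 HDR splitter rules described above.
# Hypothetical helper for illustration; not a real Mellanox tool or API.

def splitter_link_valid(source_device: str, source_port: str, targets: list) -> bool:
    """A 1:2 splitter is valid only from an HDR200 switch port, splitting
    down to exactly two HDR100 ends on ConnectX-6 HBAs or HDR switches."""
    if source_device != "switch" or source_port != "HDR200":
        return False                      # never splits from an HBA port
    if len(targets) != 2:
        return False                      # always a 1:2 breakout
    # HDR100 ends require 2x50G signaling: ConnectX-6 or another HDR switch.
    return all(dev in ("connectx6_hba", "hdr_switch") for dev in targets)

# Valid: HDR200 switch port down to two ConnectX-6 HBAs.
assert splitter_link_valid("switch", "HDR200", ["connectx6_hba", "connectx6_hba"])
# Invalid: splitting from an HBA port, or down to 4x25G ConnectX-5 / EDR gear.
assert not splitter_link_valid("hba", "HDR200", ["connectx6_hba", "connectx6_hba"])
assert not splitter_link_valid("switch", "HDR200", ["connectx5_hba", "connectx5_hba"])
```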


AOC Cables

Active Optical Cables (AOCs) convert the port’s electrical signals into optical pulses that are sent down an optical fiber and converted back again at the other end. This extends the high-speed link reach from the 3-meter limitation of DAC to up to 100 meters, although the average installed reach is typically under 40 meters, due to the difficulty of threading a transceiver end over a long distance in a congested data center!


AOCs do not use optical connectors, as optical transceivers do, making them an easy-to-use, plug-and-play cable solution. There is no blizzard of optical technology acronyms to learn, no complex interoperability issues to master, no matching of components, and no optical connectors to clean and maintain. The result is a plug-and-play cabling solution that can span reaches from 0.5 meters to 100 meters: DAC covers reaches up to 3 meters, and AOCs pick up from there, covering 3 to 100 meters.

Everything Designed & Manufactured by Mellanox

The AOC transceiver’s internal optics are based on Mellanox-designed Silicon Photonics technologies and transceiver control ICs. Being vertically integrated for key components enables Mellanox to be first to market with the most advanced products. Even the Silicon Photonics laser is a Mellanox design, not the typical DFB laser.

Mellanox also manufactures its own DAC cables and AOCs in several locations around the world, maintaining very high levels of quality and capacity to supply customers with high unit volumes when they need them, while keeping costs at the lowest levels.

Key Takeaways

Mellanox LinkX DAC and AOC cables offer the industry’s lowest-cost copper and optical links with the lowest power and latency. This enables the most cost-efficient interconnect solution, with the highest ROI and the lowest CapEx and OpEx. The straight and splitter DACs, along with the extended-reach AOCs, let customers build a wide variety of configurations to meet every application need: up to 40 ports of HDR200 between switches and subsystems, 80 ports of HDR100 to HBAs, or any mixture of both, at reaches up to 100 meters between components. Everything is manufactured by Mellanox and uses Mellanox-designed Silicon Photonics and control ICs.

Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.

More Information:

Mellanox InfiniBand & Ethernet AOCs & DAC cables and transceivers.

Mellanox HDR switches

Mellanox HDR ConnectX-6 host bus adapters


About Brad Smith

Brad is the Director of Marketing at Mellanox, based in Silicon Valley, for the LinkX cables and transceivers business, focusing on the hyperscale, Web 2.0, enterprise, storage, and telco markets. Previously, Brad was Product Line Manager for Intel’s Silicon Photonics group for the CWDM4/CLR4 and QSFP28 product lines, and ran the 100G CLR4 Alliance. He has been Director of Marketing & BusDev at the OpSIS MPW Silicon Photonics foundry, President/COO of LuxSonar Semiconductors (Cirrus Logic), and co-founder & Director of Product Marketing at NexGen, an X86-compatible CPU company sold to AMD, now the X86 product line. Brad also has ~15 years in technology market research, as Vice President of the Computer Systems group at Dataquest/Gartner and VP/Chief Analyst at the RHK and Light Counting networking research firms. Brad started his career at Digital Equipment near Boston with the VAX 11/780 and has served as CEO, President/COO, and on the board of directors of three start-up companies. He holds a BSEE degree from the University of Massachusetts, an MBA from the University of Phoenix, and two optical patents.
