All posts by admin

Using InfiniBand as a Unified Cluster and Storage Fabric

InfiniBand has been the superior interconnect technology for HPC since it was first introduced in 2001, leading with the highest bandwidth and lowest latency year after year. Although it was originally designed for, and remains ideal for, inter-process communication, what many people may not realize is that InfiniBand brings advantages to nearly every role an interconnect fabric plays in today’s modern data center.

First, let’s review what a fabric actually does in the context of a Beowulf-architecture HPC cluster. In addition to the inter-process communication already mentioned, compute nodes need access to shared services such as storage, network boot or imaging, internet access, and out-of-band management. Traditionally, it was common in HPC cluster designs to build one or more Ethernet networks alongside InfiniBand for some of these services.

The primary use of a high-performance fabric in any HPC cluster is IPC (inter-process communication), with support for RDMA and higher-level protocols such as MPI, SHMEM, and UPC. Mellanox InfiniBand HCAs (host channel adapters) support RDMA with less than 1% CPU utilization, and the switches in an InfiniBand fabric can work in tandem with HCAs to offload nearly 70% of the MPI protocol stack to the fabric itself, effectively enlisting the network as a new generation of co-processor. And speaking of co-processors, newer capabilities such as GPUDirect and rCUDA extend many of these same benefits to attached GPGPUs and other co-processor architectures.
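
As a rough illustration of how software sees the fabric, the short sketch below (a minimal example, not Mellanox-specific code; it assumes the libibverbs library is installed and at least one HCA is present) simply enumerates the RDMA devices exposed to user space. Higher-level layers such as MPI, SHMEM, and UPC are built on this same verbs interface.

    /* Minimal sketch: enumerate the RDMA-capable devices (HCAs) that
     * libibverbs exposes to user space.  Build with: gcc list_hcas.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        printf("Found %d RDMA device(s)\n", num_devices);
        for (int i = 0; i < num_devices; ++i)
            printf("  %s\n", ibv_get_device_name(devs[i]));  /* e.g. mlx4_0 or mlx5_0 */

        ibv_free_device_list(devs);
        return 0;
    }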

The language of the internet is TCP/IP, which is also supported by an InfiniBand fabric using a protocol known as IPoIB (IP over InfiniBand). Simply put, every InfiniBand HCA port presents a network device to the kernel which can be assigned an IP address and fully utilize the same IPv4 and IPv6 network stacks as Ethernet devices. Additionally, a technology called Virtual Protocol Interconnect (VPI) allows any InfiniBand port to operate as an Ethernet port when connected to an Ethernet device, and Mellanox manufactures “bridging” products that forward TCP/IP traffic from the IPoIB network to an attached Ethernet fabric for full internet connectivity.
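
To make that concrete, here is a minimal sketch that treats an IPoIB port exactly like any other network interface: an ordinary UDP socket pinned to the device with a Linux-specific socket option. The interface name ib0 is an assumption; use whatever name the kernel assigns to your IPoIB port. Nothing in the code is InfiniBand-specific.

    /* Minimal sketch (Linux-specific): pin an ordinary UDP socket to an
     * IPoIB interface.  "ib0" is an assumed interface name; SO_BINDTODEVICE
     * normally requires root or CAP_NET_RAW. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* The IPoIB port is just another network device to the IP stack. */
        const char *ifname = "ib0";
        if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, ifname, strlen(ifname)) < 0)
            perror("SO_BINDTODEVICE");

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9000);              /* arbitrary example port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("bind");

        close(fd);
        return 0;
    }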

Storage can also use the IP protocol, but parallel filesystems such as GPFS, Lustre, and other clustered filesystems also support RDMA as a data path for enhanced performance. The ability to support both IP and RDMA on a single fabric makes InfiniBand an ideal way to access parallel storage for HPC workloads. End-to-end data protection features and offloads of other storage-related protocols such as NVMe over Fabrics (for PCIe-connected solid-state storage) and erasure coding further enhance the ability of InfiniBand to support and accelerate access to storage.

Mellanox ConnectX® InfiniBand adapters also support a feature known as FlexBoot. FlexBoot enables remote boot over InfiniBand or Ethernet, or even boot over iSCSI (Bo-iSCSI). Combined with VPI technology, FlexBoot provides the flexibility to deploy servers with one adapter card into either InfiniBand or Ethernet networks, with the ability to boot from LAN or from remote storage targets. The technology is based on the PXE (Preboot Execution Environment) standard specification, and the FlexBoot software is based on the open-source iPXE project (see www.ipxe.org).

Hyperconverged data centers, Web 2.0, machine learning, and non-traditional HPC practitioners are now taking note of the maturity and flexibility of InfiniBand and adopting it to realize accelerated performance and improved ROI from their infrastructures. The advanced offload and reliability features offered by Mellanox InfiniBand adapters, switches, and even cables mean that many workloads can realize greater productivity, acceleration, and stability. Our new InfiniBand router, which supports L3 addressing, can even interconnect multiple fabrics with different topologies, allowing InfiniBand to scale to an almost limitless number of nodes.

InfiniBand is an open standard for computer interconnect, backward and forward compatible, and supported by over 220 members of the InfiniBand Trade Association (IBTA). Mellanox remains the industry leader, committed to advancing this technology generations ahead of our competitors with leading-edge silicon products integrated into our adapters (HCAs), switching devices, cables, and more. If you want the highest-performance, lowest-latency, best-scaling fabric for all of your interconnect needs, consider converging on Mellanox InfiniBand.

Join me on Tuesday, March 14th at 10 a.m. for our webinar, One Scalable Fabric for All: Using InfiniBand as a Unified Cluster and Storage Fabric with IBM.

Dell launches SwitchIB-2: One Step Closer to Exascale Computing

Dell will once again be demonstrating its HPC leadership with the release of the Mellanox SwitchIB-2 featuring Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. Mellanox and Dell have a long history of delivering HPC solutions and are partners in many of the TOP500 supercomputers in the world. These supercomputers use InfiniBand infrastructure from Mellanox and PowerEdge servers from Dell to achieve the highest levels of performance and efficiency.

For high-performance computing applications, the performance of the compute cluster is highly dependent on internode communication. Network latency is extremely important to the overall performance of distributed applications, because each node needs the results from all the other nodes before moving on to the next iteration of the calculation. Network latency therefore contributes directly to the overall execution time and compute efficiency of the cluster. Mellanox InfiniBand solutions lead the industry with the lowest-latency, highest-throughput switches and network interface adapters.


CORE-Direct technology, introduced by Mellanox in partnership with Oak Ridge National Lab, was the first step toward taking a holistic system approach to improving performance by executing collective communications in the network. SHARP technology extends this approach, moving support for collective communication from the edges of the network (the hosts) into its core, the switch fabric. This emerging class of intelligent interconnect devices, including the Mellanox SwitchIB-2 EDR 100Gb/s InfiniBand switch with SHARP, enables a new generation of in-network co-processing and more effective mapping of communication between devices in the system, increasing system performance by an order of magnitude.

SHARP greatly improves the performance of many scientific and engineering applications by reducing the time needed to perform collective operations. Without it, such operations must wait for data to arrive at the server nodes and consume many CPU clock cycles, which limits an application’s ability to scale. SwitchIB-2 frees the CPU from performing these operations, leaving more CPU cycles for the application itself and greatly improving application scalability.
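
For reference, the class of operation SHARP accelerates looks like the following from the application’s point of view. This is only a minimal MPI sketch of a global sum across all ranks; when SHARP is enabled in the underlying MPI library, the reduction is performed in the switch fabric and the application code does not change.

    /* Minimal sketch: a global sum with MPI_Allreduce, the class of collective
     * operation that SHARP can execute inside the switch fabric.  Build and run
     * with an MPI toolchain, e.g. mpicc allreduce.c && mpirun -np 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;    /* each rank contributes one value */
        double global = 0.0;

        double t0 = MPI_Wtime();
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("sum over %d ranks = %.1f  (%.3f us)\n",
                   size, global, (t1 - t0) * 1e6);

        MPI_Finalize();
        return 0;
    }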

The most advanced compute clusters in the world depend on Mellanox to deliver the performance, efficiency, and scalability that HPC demands. Only SwitchIB-2 delivers the performance characteristics needed for scale-out infrastructures, bringing us one step closer to Exascale computing.

Links:

SB7800 SwitchIB-2 InfiniBand Switch: https://www.mellanox.com/page/products_dyn?product_family=225&mtag=sb7800

ConnectX®-4 Single/Dual-Port Adapter supporting 100Gb/s with VPI: https://www.mellanox.com/page/products_dyn?product_family=201&mtag=connectx_4_vpi_card

CORE-Direct

SHARP

Case study:

Stony Brook

Videos:

SDSC

Cape Town HPC

 

2016 Grand Prix Havana: Yarden Gerbi Wins Gold

Mellanox extends congratulations to Yarden Gerbi for winning the gold medal at the recent Grand Prix Havana 2016 competition in Cuba. Gerbi defeated Maricet Espinosa in the final of the under-63kg competition. Gerbi is currently training to compete in the Olympic Games to be held this summer in Rio de Janeiro, Brazil.

 

Yarden Gerbi (Photo credit: Jpost.com)

Yarden Gerbi (Photo credit: http://sports.walla.co.il)

 


Dell releases FDR InfiniBand Switches from Mellanox

Today, Dell announced the release of Mellanox end-to-end FDR 56Gb/s InfiniBand Top-of-Rack (ToR) solutions. Three ToR switches will be available: the Mellanox SX6012 (12-port), the SX6025 (36-port, unmanaged), and the SX6036 (36-port, managed). This was highlighted even further by Dell making Mellanox EDR 100Gb/s end-to-end InfiniBand switches, adapters, and cables available through Dell S&P. Customers now have an unmatched interconnect available for the Dell PowerEdge Server Family; together, these solutions deliver unparalleled performance and scalability for high-performance and data-intensive applications.

Mellanox InfiniBand FDR Switches

Mellanox FDR/EDR 56/100Gb/s InfiniBand adapters, switches, cables, and software are the most efficient solutions for server and storage connectivity, delivering high throughput, low latency, and industry-leading application performance for enterprise solutions. They are widely deployed across HPC, cloud, Web 2.0, and Big Data, satisfying the most demanding data center requirements.

 

The Dell PowerEdge Server Family has reinvented enterprise data center solutions and data analytics by changing the equation of performance, space, power, and economics, and as a result delivers breakthrough performance at record-setting efficiencies.

 

Together we are enabling customers to build highly efficient and scalable cluster solutions at a lower cost, with less complexity, in less rack space. In an effort to help customers realize the advantages of Dell server and storage platform designs combined with Mellanox high-performance interconnect, we are investing in upgrading the Dell Solution Centers in the US, EMEA, and APJ with end-to-end InfiniBand technology, enabling performance benchmarking and application-level testing with the latest HPC technologies.

 


Mellanox and HP Collaborate Together with EDR 100Gb/s InfiniBand

Today, at ISC’15, we announced the growing industry-wide adoption of our end-to-end EDR 100Gb/s InfiniBand solutions. This was highlighted even further by HP announcing end-to-end EDR 100Gb/s InfiniBand enablement plans across its Apollo Server Family; together, these solutions deliver unparalleled performance and scalability for HPC and Big Data workloads.

 

We are thrilled to have HP’s HPC and Big Data team as a key technology partner enabling these verticals. The collaboration allows our companies to deliver optimized compute platforms with the most efficient, highest-performance interconnect available on the market today.

 

Mellanox EDR 100Gb/s InfiniBand adapters, switches, cables, and software are the most efficient solutions for server and storage connectivity, delivering high throughput, low latency, and industry-leading application performance for both HPC and Big Data applications.

 

The HP Apollo Server Family has reinvented high-performance computing and Big Data analytics by changing the equation of performance, space, and power, and as a result delivers breakthrough performance at record-setting efficiencies.

 

Together we are enabling customers to build highly efficient and scalable cluster solutions at a lower cost, with simplicity, in less rack space. In an effort to help customers realize the advantages of HP’s best-in-class server designs combined with Mellanox high-performance interconnect, we are investing in upgrading the HP competency and benchmarking centers in the US, EMEA, and APJ with end-to-end EDR (100Gb/s) InfiniBand technology and HP’s latest ProLiant Gen9 servers, enabling performance benchmarking and application-level testing with the latest HPC technologies.

 

Mellanox Selfie Photo Contest: Living the Mellanox Life!

We are excited to announce the Mellanox Selfie Photo Contest. The contest will run through the summer, so you have plenty of time to upload your photo and gather votes for your best entry. The top prize is a $5,000 USD American Express gift card! Additional prizes include a GoPro+ deluxe set or a Canon digital camera.

 


 

We want to see YOU! Show us through your phone lens how Mellanox is a part of your life. It’s easy! Follow the steps below:

Global Corporate Challenge: Getting Started with Fitness

Several weeks ago, I received an email encouraging Mellanox employees in the United States to join the Global Corporate Challenge (GCC). I almost filed the email in the “Other stuff from people” folder, which, by way of sheer coincidence, has a little trash can icon next to it.

 

But, then I paused.

 

I really should move more.  I can admit it.  I just don’t move as much as I should.  A few (okay, more than a few) years of working in the marketing field has caused me to adopt a sedentary lifestyle that can’t be good for my health.  Even two-times-motherhood hasn’t pushed me off the desk chair.


Optimized Server Networking for High Performance Infrastructures

Customers today are seeking low latency operations to enable high performance cloud computing, big data, database and virtualization applications.  To meet this demand, Mellanox has collaborated with HP to optimize the HP ProLiant Server networking for high performance infrastructures.


HP Ethernet 10Gb 2-port 546FLR-SFP+ Adapter

HP recently announced two new adapters, the first in the HP Ethernet adapter family based on the Mellanox ConnectX®-3 Pro 10GbE. These adapters are optimized for fast, efficient, and scalable cloud and Network Functions Virtualization (NFV) deployments. The HP Ethernet 10Gb 2-port 546FLR-SFP+ and 546SFP+ stand-up adapters for ProLiant Gen9 rack servers are specifically designed to optimize cloud efficiency and to improve the performance and security of applications.

 


HP Ethernet 10Gb 2-port 546SFP+ Adapter


Turbo LAMP Stack for Today’s Demanding Application Workload

With the rise of cloud computing and mobile technologies, today’s market demands applications that deliver information from mounds of data to a myriad of end-user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems (typically ERP, SCM, and HRM applications) and then deliver it through systems of engagement with those end users.

 

The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.

 

The LAMP stack has become popular because each component can, in theory, be interchanged and adapted without lock-in to a specific vendor’s software stack. These solutions have grown to support many business-critical systems of engagement, despite the need for more powerful, scalable, and reliable hardware systems. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.


Updated! Vote for #OpenStack Summit Vancouver Presentations

The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in seeing at the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.

In order to vote, you will need to register with the OpenStack Foundation: https://www.openstack.org/join/register/. Voting for all presentations closes on Monday, February 23 at 5:00 PM CST (GMT-6:00).

 

Vote! #OpenStack Summit

 

For your reference, we have included a list of Mellanox sessions below; click on a title to submit your vote:

 

Accelerating Applications to Cloud using OpenStack-based Hyper-convergence  

Presenters: Kevin Deierling (@TechSeerKD) &  John Kim (@Tier1Storage)
