All posts by Gilad Shainer

About Gilad Shainer

Gilad Shainer has served as Mellanox's Vice President of Marketing since March 2013. Previously, he was Mellanox's Vice President of Marketing Development from March 2012 to March 2013. Gilad joined Mellanox in 2001 as a design engineer and later served in senior marketing management roles between July 2005 and February 2012. He holds several patents in the field of high-speed networking and contributed to the PCI-SIG PCI-X and PCIe specifications. Gilad holds an MSc degree (2001, Cum Laude) and a BSc degree (1998, Cum Laude) in Electrical Engineering from the Technion Institute of Technology in Israel.

Mellanox applies for the award of German IT-Security Certificate based on Common Criteria for its Mellanox Innova IPsec encryption network adapter

Mellanox Technologies, a leading manufacturer of intelligent Ethernet communication technology for data centers, servers and hyper-converged infrastructures, announces that it will apply to the German Federal Office for Information Security (BSI) for the award of the German IT-Security Certificate, based on Common Criteria, for its Mellanox Innova IPsec EN network adapter card.

The Innova IPsec EN adapter card provides security acceleration for IPsec-enabled networks while taking advantage of its underlying ConnectX-4 Lx network adapter technology. The adapter offloads the processing of IPsec algorithms, frees up the CPU and eases network bottlenecks. The Mellanox Innova IPsec network adapter integrates advanced network capabilities and crypto offloading in one card, utilizing only a single PCIe slot for both networking and crypto functions.
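
To put that offload in perspective, here is a minimal sketch (our illustration using the Python cryptography package, not Mellanox driver code) that times software AES-GCM encryption of the kind used for IPsec ESP; this per-packet work is exactly what the adapter moves off the host CPU:

```python
# Minimal sketch (not Mellanox driver code): time software AES-256-GCM,
# the kind of per-packet crypto work an IPsec offload adapter removes
# from the host CPU. Requires the 'cryptography' package.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
packet = os.urandom(1400)          # typical Ethernet payload size
aad = os.urandom(8)                # stand-in for the ESP header

start = time.perf_counter()
n_packets = 100_000
for _ in range(n_packets):
    nonce = os.urandom(12)         # GCM requires a unique nonce per packet
    aesgcm.encrypt(nonce, packet, aad)
elapsed = time.perf_counter() - start

gbps = n_packets * len(packet) * 8 / elapsed / 1e9
print(f"Software AES-GCM throughput on one core: ~{gbps:.1f} Gb/s")
```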

German security technology specialist secunet Security Networks AG will support the preparation and execution of the evaluation process. Mellanox and secunet have cooperated successfully for several years on the development of open-source-based IT security solutions, particularly for the encryption of network communication and the protection of cloud infrastructures.

The evaluation will most likely be performed by SRC Security Research & Consulting GmbH. SRC is an experienced consulting company and evaluation agency, certified by the German BSI to perform the evaluation of products and solutions with high security requirements.

InfiniBand Accelerates World’s Top HPC and Artificial Intelligence Supercomputer at Oak Ridge National Laboratory

Summit, The New 200 Petaflops System, Leverages EDR InfiniBand In-Network Computing Technology to Deliver Unprecedented Computing Power for Scientific Simulation and Artificial Intelligence Applications

Mellanox EDR InfiniBand accelerates the world’s new top high-performance computing (HPC) and artificial intelligence (AI) system, named Summit, at Oak Ridge National Laboratory. Summit delivers 200 petaflops of performance and leverages a dual EDR InfiniBand network to provide an overall 200 gigabits per second of throughput to each compute server.

Summit, the world’s top supercomputer, provides eight times the performance of the previous fastest supercomputer in the US. Summit will enable simulations of greater resolution and higher fidelity to advance human knowledge in diverse science domains such as biology, nuclear science, cosmology and more. Some early Summit research projects include studying exploding stars at unprecedented scales, simulating particle turbulence in sustainable fusion reactions, and researching materials for high-temperature superconductors, among others.

Summit is also the fastest Artificial Intelligence platform in the world, offering unparalleled opportunities for the integration of AI and scientific discovery. Applying machine learning and deep learning capabilities to automate, accelerate, and drive understanding at supercomputer scales will help scientists achieve breakthroughs in human health, energy and engineering, and answer fundamental questions about the universe.

“We are proud to accelerate the world’s top HPC and AI supercomputer at the Oak Ridge National Laboratory, the result of a great collaboration over the last few years between Oak Ridge National Laboratory, IBM, NVIDIA and us,” said Eyal Waldman, president and CEO of Mellanox Technologies. “Our InfiniBand smart acceleration and offload technology delivers the highest HPC and AI application performance, scalability, and robustness. InfiniBand enables organizations to maximize their data center return on investment and improve their total cost of ownership and, as such, it connects many of the top HPC and AI infrastructures around the world. We look forward to being part of, and accelerating, the new scientific discoveries and advances in AI development that Summit will perform and enable.”

The need to analyze growing amounts of data, to support complex simulations, to overcome performance bottlenecks and to create intelligent data algorithms requires the ability to manage and carry out computational operations on the data as it is being transferred by the data center interconnect. Mellanox InfiniBand solutions incorporate the In-Network Computing technology that performs data algorithms within the network devices, delivering ten times higher performance, and enabling the era of “data-centric” data centers.
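
As an illustration of the kind of operation In-Network Computing moves into the fabric, the short sketch below shows an MPI Allreduce, the data-reduction collective at the heart of many HPC and AI workloads (our generic mpi4py example; it runs on any MPI installation, with or without in-network acceleration):

```python
# Minimal sketch of an MPI Allreduce, the collective that in-network
# computing (e.g., switch-based aggregation) can offload from the hosts.
# Run with: mpirun -np 4 python allreduce_sketch.py   (requires mpi4py)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local gradient-like vector...
local = np.full(1_000_000, rank, dtype=np.float64)
result = np.empty_like(local)

# ...and the MPI/network layer sums them across all ranks.
comm.Allreduce(local, result, op=MPI.SUM)

if rank == 0:
    print("sum of ranks:", result[0])   # e.g., 0+1+2+3 = 6 for 4 ranks
```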

“Summit’s HPC- and AI-optimized infrastructure enables us to analyze massive amounts of data to better understand world phenomena, to enable new discoveries and to create advanced AI software,” said Buddy Bland, Program Director at the Oak Ridge Leadership Computing Facility. “InfiniBand In-Network Computing technology is a critical new technology that helps Summit achieve our scientific and research goals. We are excited to see the fruits of our collaboration with Mellanox over the last several years through the development of the In-Network Computing technology, and we look forward to taking advantage of it to achieve the highest performance and efficiency for our applications.”

For more information on Summit, please check https://www.olcf.ornl.gov/summit/.

The University of Alaska and Mellanox Study the Solar System’s Lord of the Rings, Saturn

Humans were obsessed with the stars and exploration long before the written word. From Stonehenge and the Mayan calendar, to American mission commander Neil Armstrong and pilot Buzz Aldrin, who manned the lunar module Eagle on July 20, 1969, humans have been reaching, literally, for the stars and the planets beyond. The quest for “One giant leap for mankind” is seemingly never ending.

In a major space policy speech at Kennedy Space Center on April 15, 2010, then-U.S. President Barack Obama predicted a manned Mars mission to orbit the planet by the mid-2030s, followed by a landing. We’ve already traveled far beyond our blue planet, photographed in detail the shocking landscape of Pluto, skirted the gas giant Jupiter in all its red glory, and now it’s time to put down some roots. Humankind is determined to take our Manifest Destiny beyond this mortal coil and colonize an alien planet. Next stop, Mars.

However, all the glamour and prose aside, you don’t need to be an astrophysicist to know that space travel is exceedingly dangerous. Cosmic rays, radiation, microgravity, high-speed micrometeorites … these are just a few of the life-ending conditions space pioneers will face every day in mankind’s quest to colonize Mars.

Peter Delamere, Professor of Space Physics at the University of Alaska Fairbanks’ Geophysical Institute, knows a lot about the weather. Space weather, to be precise. Because space weather impacts many aspects of our near-Earth space environment, it also poses a potential risk to Earth-orbiting satellites, transpolar flights, and, of course, human space exploration. Thus, comparative studies of planetary space environments are crucial for understanding the basic physics that determine space weather conditions. One of the most dramatic manifestations of space weather can be found in the aurorae, or as most of us know them, the aurora borealis and aurora australis.

Turns out, we already know that Earth, Jupiter and Saturn all have aurora lights in their respective polar regions. It’s just that the space weather that creates these lights is fundamentally different. Studies show that Saturn’s aurora may be driven internally by Saturn’s rapid rotation rather than by the solar wind, as is the case on Earth.  Ultimately, space weather research strives to make accurate predictions that will help mitigate risks to ongoing space activity and human exploration.

Illustration of the magnetic field topology and flux circulation at Saturn. Flows are shown with red arrows. Magnetic fields are shown in purple (mapping to outer magnetosphere) and blue (showing bend back and bend forward configurations). From Delamere et al. 2015.

The figures above show results from a three-dimensional simulation of the Kelvin-Helmholtz instability (counter-streaming flows that generate vortices) at Saturn’s magnetopause boundary. This is the boundary that mediates the solar wind interaction with Saturn’s magnetosphere. The complicated surface waves mix solar wind and magnetospheric plasma, causing a “viscous-like” interaction with the solar wind. Similar processes happen on Earth, but are highly exaggerated on Saturn and Jupiter. The lines are magnetic field lines.
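
For a back-of-the-envelope feel for how quickly such surface waves can grow, the sketch below evaluates the textbook growth rate of the Kelvin-Helmholtz instability for an idealized, unmagnetized vortex sheet; the flow shear and wavelengths are assumed values, and the real magnetized, compressible boundary behaves differently:

```python
# Back-of-the-envelope sketch (not the Delamere 3-D code): linear growth
# rate of the Kelvin-Helmholtz instability for an idealized, unmagnetized,
# equal-density vortex sheet, gamma = k * |dV| / 2.  Real magnetopause
# boundaries add magnetic tension and compressibility, which change growth.
import numpy as np

delta_v_km_s = 300.0                              # assumed flow shear across the boundary
wavelengths_km = np.array([1e3, 5e3, 1e4, 5e4])   # assumed perturbation wavelengths

k = 2.0 * np.pi / wavelengths_km        # wavenumber [1/km]
gamma = k * delta_v_km_s / 2.0          # growth rate [1/s] (km units cancel)

for lam, g in zip(wavelengths_km, gamma):
    print(f"wavelength {lam:>8.0f} km -> e-folding time ~{1.0/g:>7.1f} s")
```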

Innovation in the field of Space Plasma Physics, which is driving our collective understanding of space weather and its potential impact, is highly dependent upon access to HPC resources. Numerical simulations must resolve the vast spatial domains inherent in a space plasma environment. So, having access to local, reliable HPC resources, such as Mellanox HPC solutions, enables the Computational Space Physics group at the Geophysical Institute to further this important research. The Delamere group, which is part of the Computational Space Physics group at the Geophysical Institute, is currently funded by numerous NASA projects amounting to over $2M, all of which require considerable HPC resources.

When Congress established the Geophysical Institute in 1946, they could not have possibly predicted the depth and impact of the research that would be conducted and the work that would be done there. Across space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation, the institute continues to make discoveries and innovations that are changing the world for the better.

In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, UAF vice chancellor of research, UAF International Arctic Research Center, and UAF IDeA Network of Biomedical Research Excellence, UAF Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. We knew something of the work being done at the Geophysical Institute at that time but even we at Mellanox didn’t yet understand the full impact of their research. From deep within the earth, to the far reaches of our solar system, Mellanox’s leadership in HPC solutions is helping to solve some of science’s toughest challenges. The final blog in this series will come full circle and focus on the long-term data and research driven by Uma S. Bhatt, Professor of Atmospheric Sciences at the Geophysical Institute; and the efforts underway to study the climate in the most inhospitable and inaccessible region of our planet, the Arctic.

Our Interconnected Planet: The University of Alaska and Mellanox take HPC Climate Research by Storm

“How’s the weather?” is probably the most oft-uttered question in the history of mankind. And with the recent epic hurricane devastating Houston, Texas, the weather is on everyone’s mind these days.

Weather has been the bane of humans for as long as we have been around. Everyone thinks that the Inuit have the most words for snow (between 40 and 50, depending on how you count), but in reality it is the Scots who claim the most snow-related words, 421 to be precise. Who knew? Flindrikin means a light snow shower, at least if you are in Scotland, where people apparently take their weather seriously. Weather is also a serious topic for the researchers tackling Arctic climate at the University of Alaska.

Uma S. Bhatt, Professor of Atmospheric Sciences, Geophysical Institute, University of Alaska, probably knows more words for snow than most. She is seeking a better understanding of the Arctic earth system with respect to the need for long-term climate information (e.g., air temperature, precipitation, wind speeds). The challenge is that most of these data (e.g., atmospheric re-analyses, climate models) are available at spatial resolutions on the order of hundreds of kilometers, which is nowhere near the resolution needed to support process studies and assess local impacts. To address this need, high-resolution climate information has been created at a 20-km resolution through a process called dynamical downscaling.
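
To get a feel for why downscaling is computationally demanding, the rough sketch below compares the number of horizontal grid cells at a ~100 km reanalysis spacing and at the 20 km downscaled spacing over an assumed Alaska-sized domain (our illustration, not the actual WRF configuration):

```python
# Rough illustration (assumed domain size, not the actual WRF setup):
# horizontal grid cells needed to cover an Alaska-sized domain at coarse
# reanalysis resolution versus the 20-km downscaled resolution.
domain_km = (3000, 2000)        # assumed east-west x north-south extent

def cells(resolution_km):
    nx = domain_km[0] // resolution_km
    ny = domain_km[1] // resolution_km
    return nx * ny

for res in (100, 20):
    print(f"{res:>3} km grid spacing -> {cells(res):>6,} horizontal cells")

# Output: 100 km -> 600 cells, 20 km -> 15,000 cells: a 25x jump before
# even counting vertical levels, shorter time steps, and extra variables.
```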

European Center Reanalysis (ERA-Interim) daily average temperature for 4 July 1979 (left) and dynamically downscaled maximum temperature (Tmax) from the Weather Research Forecast (WRF) model at ~20km resolution. Units are ˚C.

Downscaling is particularly successful in improving climate information from lower-resolution models in areas of complex topography by producing more realistic precipitation and temperature gradients. Capturing the local temperature variations is only possible through downscaling of climate information. This downscaling activity at the University of Alaska is supported by the Alaska Climate Science Center through the Department of the Interior, and is possible only because of the locally available Mellanox HPC computing resources. A key advantage of dynamical downscaling is that a full suite of atmospheric model variables is available, which provides a rich data source for understanding the underlying mechanisms of Arctic climate change. Variables at the surface include precipitation, snow water equivalent, soil moisture and temperature, solar radiation, terrestrial radiation, and sensible and latent heat fluxes. Variables at multiple levels in the atmosphere include temperature, moisture, winds, and geopotential height. No wonder your local weather person gets it wrong so often.

Investigations of these data will help advance the world’s understanding of the climate drivers of various parts of the Earth system. Beyond its own scientific endeavors, the research team is generously making this downscaled data available to glaciologists, hydrologists, ocean wave modelers, wildlife biologists and others for use in other scientific investigations. These collaborations help everyone better understand the data, as additional scientists evaluate it in the context of their part of the Earth system. This rich data set is also being used to ask questions about glacier mass balance in southern Alaska, rain-on-snow events relevant for caribou mortality, wildland fire susceptibility, and numerous other topics relevant for Alaska and other parts of the world.

According to Prof. Bhatt, the computational demands of dynamical downscaling are quite daunting. The data storage requirement alone can be 3.3 TB (that’s terabytes) for just one year of raw model output, which reduces to about 300 GB of post-processed data when only the most-used variables are extracted and saved as daily values.

So, just how much is 1 terabyte these days? Assuming the average photo is 500 KB, a 1 TB hard drive would hold some 2 million photos.
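
The arithmetic behind those numbers is easy to check (a quick sketch using the figures quoted above):

```python
# Quick check of the storage figures quoted above.
raw_tb_per_year = 3.3                       # raw model output, one model year
post_gb = 300                               # post-processed daily variables
print(f"post-processed fraction: {post_gb / (raw_tb_per_year * 1000):.0%}")  # ~9%

# And the photo comparison: 1 TB of 500 KB photos.
photos_per_tb = 1e12 / 500e3
print(f"photos per terabyte: {photos_per_tb:,.0f}")   # ~2,000,000
```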

Augmenting HPC resources at UAF in January 2017 by adding Mellanox InfiniBand solutions across multiple racks to form their HPC system has given the team the chance to downscale additional models and different climate scenarios in order to reduce the uncertainty in future projections for Alaska. And sharing this valuable data space with other researchers is key to the spirit and generosity of the University of Alaska and its mission to innovate in all areas of research: space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation. Along with the University of Alaska, Mellanox is proud to be part of this journey, to be helping with this quest for knowledge and a deeper understanding of our planet and the universe beyond.

The University of Alaska Fairbanks and Mellanox’s HPC Take on Earthquakes

Dr. Carl Tape is an associate professor of geophysics at the University of Alaska Fairbanks (UAF), with appointments at the Geophysical Institute and the Department of Geosciences. He is conducting research on seismic tomography and seismic wave propagation. Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. Seismic waves travel through the Earth’s layers and originate from earthquakes, volcanic eruptions, magma movement, large landslides or large man-made explosions that give out low-frequency acoustic energy. Dr. Tape is leading research efforts in Alaska with the goal of developing a 3D model of the subsurface crust and upper mantle.
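
To give a flavor of what seismic wave propagation modeling involves, here is a toy one-dimensional finite-difference sketch of the acoustic wave equation (our illustration, far simpler than the 3D codes used in this research):

```python
# Toy 1-D acoustic wave propagation sketch (ours, not the research code):
# second-order finite differences for u_tt = c(x)^2 * u_xx, illustrating
# why 3-D versions of this at fine resolution demand HPC resources.
import numpy as np

nx, nt = 1000, 2000
dx, dt = 100.0, 0.01            # 100 m grid spacing, 10 ms time step
c = np.full(nx, 3000.0)         # assumed wave speed 3 km/s
c[nx // 2:] = 5000.0            # a faster layer in the second half

# CFL stability check: c*dt/dx must stay below 1 for this explicit scheme.
assert (c * dt / dx).max() < 1.0

u_prev = np.zeros(nx)
u_curr = np.zeros(nx)
for it in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]
    u_next = 2 * u_curr - u_prev + (c * dt / dx) ** 2 * lap
    # A short sine-burst source injected near the left edge.
    if it < 50:
        u_next[50] += np.sin(2 * np.pi * it / 50)
    u_prev, u_curr = u_curr, u_next

print("peak amplitude at final step:", np.abs(u_curr).max())
```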

This might not seem like something the average person would be interested in, but consider that Alaska is one of the most seismically active regions in the world and by far the biggest earthquake producer in the United States, with an average of six magnitude 6 to 7 earthquakes per year and about 1,000 earthquakes in Alaska each month. Dr. Tape’s work holds the promise of understanding and reducing seismic hazards, with Mellanox’s InfiniBand leading the way in their data center.

This image shows earthquakes with a moment magnitude of greater than 4 from 1990-2010. Moment magnitude measures the size of events in terms of how much energy is released.

In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, UAF vice chancellor of research, UAF International Arctic Research Center, and UAF IDeA Network of Biomedical Research Excellence, the Geophysical Institute’s Research Computing Systems engineers deployed 11 Mellanox EDR switches and 38 compute nodes distributed across six racks.  This deployment brought Chinook to a total core count of 1892 cores and enhanced the cluster bandwidth from QDR speeds (40 Gb/s) to FDR/EDR speeds (56/100 Gb/s). Chinook now has enough rack space and InfiniBand infrastructure in place to expand to over 4,000 cores, if research demand warrants.

Now, what else are they doing at UAF with all that data center power? Well, one of the main missions of the Geophysical Institute is to understand the basic geophysical processes governing the planet Earth, especially as they occur in or are relevant to Alaska. With a motto of “Drive the Change,” UAF as a whole is focused on driving positive change for their state. The university is working to build a better future by educating a skilled workforce of teachers, engineers and nurses.

From Earth’s environment to the surface of the sun and beyond, the institute turns data and observations into information useful for state and national needs. An act of Congress established the Geophysical Institute in 1946. Since that time, the institute has earned an international reputation for studying Earth and its physical environments at high latitudes. The institute now consists of seven major research units and many research support facilities. The research units include space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation. Our Interconnected Planet theme will focus the next blog on the work of Professor David Newman, and his research on power grids.

Our Interconnected Planet: The University of Alaska Fairbanks Tackles The Unsettling Subject of Turbulence

I have long been convinced that our mutually bumpy airplane rides, all caused by turbulence, are simply getting worse. And I’m not the only one who thinks we are white knuckling it a whole lot more often these days. That is why I so appreciate the work being done by Dr. David Newman, Professor in the Physics Department and part of the Geophysical Institute at the University of Alaska Fairbanks. He is leading research on Modeling and Understanding Turbulence and Turbulent Transport, which is music to my ears. If travelers such as myself have to keep reaching for air sickness bags, we would sure like to know more about why.

According to atmospheric scientist Paul Williams, who conducted a 2013 study duly reported by the Huffington Post, it is going to take years of research before turbulence can be definitively linked to something like global warming. In fact, he argues that while turbulence seems to be increasing in frequency right now, the likely culprit is actually social media, because so many travelers are taking videos of turbulence and sharing them.

I would say, tell this to my lurching stomach, but I digress. The work being done by Dr. Newman tackles head-on one of computational science’s thorniest problems. In fact, he has developed a brand-new high performance computing technique for solving turbulence problems, called the ‘Parareal’ method, which is gaining popularity and use. Meanwhile, the main thrust of his research is to characterize the nature of, and quantify the mechanisms behind, turbulent transport. Much of the funding for this work comes from the U.S. Department of Energy’s Office of Fusion Energy, for modeling and understanding turbulence in the confined plasmas needed to make fusion work as an energy source on Earth. However, much of the research is also directly applicable to Earth’s geophysical systems such as the oceans and the atmosphere, so insights gained from it should be applicable beyond the plasma fusion context (for example, to ozone transport across atmospheric jets in the polar regions of Earth). Dr. Newman says the research would not be possible without access to HPC. So, not only will the research benefit an alternative energy source, fusion, but it may ultimately help figure out why so many of us cannot get on a plane without feeling like we have just been strapped in for a six-hour roller coaster ride.
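
For readers curious what the Parareal idea looks like in practice, here is a heavily simplified serial sketch applied to a toy equation (our illustration, not Dr. Newman’s turbulence code): a cheap coarse solver predicts the solution on each time slice, an accurate fine solver corrects it, and in a real HPC setting the fine solves run in parallel across the slices.

```python
# Heavily simplified serial sketch of the Parareal idea on the toy ODE
# du/dt = -u (exact solution e^{-t}).  In a real HPC code the fine solves
# inside each iteration run in parallel, one time slice per group of cores.
import numpy as np

T, N = 2.0, 10                  # total time, number of time slices
dT = T / N

def coarse(u, dt=dT):           # cheap propagator: one Euler step per slice
    return u + dt * (-u)

def fine(u, substeps=100):      # accurate propagator: many small Euler steps
    dt = dT / substeps
    for _ in range(substeps):
        u = u + dt * (-u)
    return u

# Initial coarse sweep.
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n])

# Parareal iterations: U[n+1] = G(new U[n]) + F(old U[n]) - G(old U[n]).
for k in range(5):
    F_old = np.array([fine(U[n]) for n in range(N)])      # parallelizable part
    G_old = np.array([coarse(U[n]) for n in range(N)])
    for n in range(N):                                     # cheap serial sweep
        U[n + 1] = coarse(U[n]) + F_old[n] - G_old[n]
    err = abs(U[-1] - np.exp(-T))
    print(f"iteration {k + 1}: error at t=T is {err:.2e}")
```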

Again, none of this would be possible without HPC. To recap, back in January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, UAF vice chancellor of research, UAF International Arctic Research Center, and UAF IDeA Network of Biomedical Research Excellence, the Geophysical Institute’s Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. The cluster, named Chinook, was made possible by a partnership between UAF and the Murdock Charitable Trust (http://murdocktrust.org/).

Figure A.1 The right panel shows turbulence in a shear flow while the left panel shows the same turbulence without a sheared flow. A shear is a change in the right to left velocity of the fluid shown in these figures. A sheared flow leads to a reduction in cross flow scale lengths and therefore a reduction in cross flow transport. This is an important topic for transport of many constituent quantities such as pollution, salinity, nutrients, temperature etc.

With research ongoing at UAF’s Geophysical Institute in disciplines ranging from space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; and remote sensing to tectonics and sedimentation, HPC is making a difference in advancing our understanding of how our Interconnected Planet works.

Our Interconnected Planet: The University of Alaska Fairbanks and Mellanox’s High Performance Computing Tracking Earth’s Most Massive Ice Shelves

Note: Recently, a chunk of ice the size of Delaware broke off Antarctica’s Larsen C ice shelf (news). With the help of Mellanox’s High Performance Computing (HPC) solutions, the University of Alaska is conducting fascinating work on ice sheets.

Those concerned with what is popularly referred to as “the fate of humanity” are apt to track comets, asteroids and even the rate of melt of one of Earth’s most precious resources, ice sheets. But, as I set out to discuss Mellanox and the University of Alaska Fairbanks’ (UAF) fascinating work on ice sheets, the thought came to me that high-tech data centers, and the High Performance Computing (HPC) running some of the biggest financial trading centers in the world, were about as far from the vast frozen wild of ice sheets as it gets.

Turns out, the more I looked into it, the more I realized that HPC and the expansive, pristine ice sheets of our blue planet are actually very closely aligned. Presently, 10 percent of the land area on Earth is covered with glacial ice, including glaciers, ice caps, and the ice sheets of Greenland and Antarctica. This amounts to more than 15 million square kilometers (5.8 million square miles). And it is worth noting that glaciers store about 75 percent of the world’s fresh water. So, tracking the size and rate of melt of these massive ice sheets is actually very important to humanity. In fact, the Greenland ice sheet occupies about 82 percent of the surface of Greenland, and if melted, would cause sea levels to rise by 7.2 meters. Estimated changes in the mass of Greenland’s ice sheet suggest it is melting at a rate of about 239 cubic kilometers (57.3 cubic miles) per year. Fate of humanity indeed.

Andy Aschwanden, Research Assistant Professor, Geophysical Institute, UAF, is studying the ice flow of the Greenland ice sheet and is on the fast track for credible modeling efforts that will help predict the future evolution of the Greenland ice sheet.

The professor reports that over the past two decades, large changes in flow have been observed in outlet glaciers and that the melt rate is speeding up, with a 17 percent increase in ice-sheet-wide melt between 2000 and 2005. To track these potentially life-altering changes in these outlet glaciers, an ice sheet model is called for. UAF researchers have been hard at work developing the open-source Parallel Ice Sheet Model since 2006, and are genuine pioneers in open-source ice sheet modeling. Development, testing, and cutting-edge research on ice sheets, says Aschwanden, go hand-in-hand with state-of-the-art HPC resources.

Essentially, the simulations needed to track the massive ice sheet’s progress involve extremely fine resolutions, large outputs and high computational demands, all of which call for the formidable computational power found in Mellanox’s HPC computing resources. Such simulations are needed as proof of concept, and the HPC resources provided by Mellanox enable routine simulations at ≤1 km grid resolution that better resolve the physics of the Greenland ice sheet.
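
For a rough sense of scale, the sketch below estimates the horizontal grid-cell counts involved (our numbers, assuming an ice-covered area of about 1.7 million square kilometers, not the actual PISM configuration):

```python
# Rough scale estimate (our numbers, not the actual PISM configuration):
# grid cells needed to cover Greenland's ice sheet at different resolutions.
ICE_AREA_KM2 = 1.7e6            # approximate ice-covered area of Greenland

for res_km in (5, 1):
    horiz_cells = ICE_AREA_KM2 / res_km**2
    print(f"{res_km} km grid: ~{horiz_cells:,.0f} horizontal cells")

# Going from 5 km to 1 km multiplies the horizontal cell count by 25, and
# explicit time stepping typically shrinks the time step as well, so the
# total work grows far faster than the resolution factor alone.
```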

Figure A.4 (A) Observed Greenland surface speeds; box indicates the location of the insets. (B–E) Basal topography at 150 m and degraded ice sheet model resolutions. (F) Observed surface speeds at 150 m. (G–J) Surface speeds modeled with PISM (adapted from Aschwanden, Fahnestock, and Truffer (2016)).

As we know from research being conducted at the University of Alaska Fairbanks, an act of Congress established the Geophysical Institute in 1946. Since that time, the institute has earned an international reputation for studying Earth and its physical environments at high latitudes. The institute now consists of seven major research units and many research support facilities, including space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation.

In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, UAF vice chancellor of research, UAF International Arctic Research Center, and UAF IDeA Network of Biomedical Research Excellence, UAF Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. “This community condo-model project launches a significant change in how high-performance computing resources are offered to the UA community,” said Gwendolyn Bryson, manager of Research Computing Systems at the UAF Geophysical Institute. “Chinook is more capable, more flexible and more efficient than our legacy HPC resources.”

Our next blog will take us from the vast frozen reaches of Earth’s ice sheets discussed in this blog to the skies above where we look into The University of Alaska’s research on turbulence.

Deep Learning in the Cloud Accelerated by Mellanox

During the GTC’17 conference week, NVIDIA and Microsoft announced new deep learning solutions for the cloud. Artificial intelligence and deep learning have the power to pull meaningful insights from the data we collect, in real time, enabling businesses to gain a competitive advantage and to develop new products faster and better. As Jim McHugh, vice president and general manager at NVIDIA, said, having AI and deep learning solutions in the cloud simplifies access to the required technology and can unleash AI developers to build a smarter world.

Microsoft announced that Azure customers using Microsoft’s GPU-assisted virtual machines for their AI applications will have newer, faster-performing options. Corey Sanders, director of Compute at Microsoft Azure, mentioned that the cloud offering will provide over 2x the performance of the previous generation for AI workloads utilizing CNTK [Microsoft Cognitive Toolkit], TensorFlow, Caffe, and other frameworks.

Mellanox solutions make it possible to speed up data insights with scalable deep learning in the cloud. Microsoft Azure NC is a massively scalable and highly accessible GPU computing platform. Customers can use GPUDirect RDMA (Remote Direct Memory Access) over InfiniBand to scale jobs across multiple instances. Scaling out to tens, hundreds, or even thousands of GPUs across hundreds of nodes allows customers to submit tightly coupled jobs built on the Microsoft Cognitive Toolkit, a framework well suited for natural language processing, image recognition, and object detection.
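
The sketch below shows the general shape of such a multi-node, data-parallel training job using TensorFlow’s MultiWorkerMirroredStrategy (our generic example, not Microsoft’s or NVIDIA’s reference code); the per-step gradient exchange between workers is the traffic that a fast RDMA-capable interconnect accelerates:

```python
# Generic multi-worker data-parallel training sketch (ours, not Azure's
# reference code).  Each VM/instance runs this script with its own
# TF_CONFIG; gradients are averaged across workers every step, which is
# the traffic that benefits from a fast RDMA-capable interconnect.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

# The global batch is split across workers by the strategy.
model.fit(x_train, y_train, batch_size=256, epochs=1)
```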

See more at: http://www.nvidia.com/object/gpu-accelerated-microsoft-azure.html#sthash.y71Fw3Jf.dpuf

Special Effects Winners Need Winning Interconnect Solutions!

Mellanox is proud to enable the Moving Picture Company (MPC, http://www.moving-picture.com/) with our world-leading Ethernet and InfiniBand solutions, which are being used during the creative process for Oscar-winning special effects.

The post-production and editing phases require very high data throughput due to the need for higher resolution (the number of pixels that make up the screen), better color-depth (how many bits are used to represent each color) and the increase in frame rate (how many frames are played in a second). Data must be edited in real-time, and must be edited uncompressed to avoid quality degradation.

You can do the simple math: a single stream of uncompressed 4K video requires 4096 x 2160 (typical 4K/UHD pixel count) x 24 (color depth in bits) x 60 (frames per second), which is about 12.7 Gb/s. Therefore, one needs interconnect speeds greater than 10 Gb/s today. As we move to 8K video, we will need data speeds greater than 100 Gb/s! Mellanox is the solutions provider for such speeds and the enabler behind the great movies of today.
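
That calculation is easy to reproduce, and the same formula shows how quickly the requirement grows with resolution, color depth and frame rate (a quick sketch; the 8K parameters are illustrative assumptions, not a quoted specification):

```python
# Uncompressed video bitrate: width x height x bits-per-pixel x frames/sec.
def bitrate_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

# The 4K example from the text: 4096 x 2160, 24-bit color, 60 fps.
print(f"4K/60/24-bit:  {bitrate_gbps(4096, 2160, 24, 60):.1f} Gb/s")   # ~12.7

# Illustrative 8K cases (assumed parameters, not a quoted spec).
print(f"8K/60/30-bit:  {bitrate_gbps(7680, 4320, 30, 60):.1f} Gb/s")
print(f"8K/120/30-bit: {bitrate_gbps(7680, 4320, 30, 120):.1f} Gb/s")
```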

We would like to congratulate MPC on winning the 2017 British Academy of Film and Television Arts (BAFTA) award for Special Visual Effects, and on winning the 2017 Visual Effects Oscar! Congratulations from the entire Mellanox team.

Find out more about the creation of The Jungle Book effects:
Telegraph

Mellanox Joins the OpenCAPI and Gen-Z Consortia, and Continues to Drive CCIX Consortium Specification Development

This week, Mellanox was part of three press releases announcing the formation of two new standardization consortia, OpenCAPI and Gen-Z, as well as a progress update from the CCIX (Cache Coherent Interconnect for Accelerators) consortium.

These new open standards demonstrate an industry-wide collaborative effort and the need for open, flexible and standard solutions for the future data center. The three consortia are dedicated to delivering technology enhancements that increase data center application performance, efficiency and scalability, in particular for data-intensive applications, machine learning, high performance computing, cloud, Web 2.0 and more.

Mellanox is delighted to be part of all three consortia, to be able to leverage the new standards in future products, and to enhance Mellanox Open Ethernet and InfiniBand solutions, enabling better communication between the interconnect and the CPU, memory and accelerators.

The different consortia share many common goals: to increase the bandwidth and reduce the latency between CPUs, memory, accelerators and the interconnect; to enable data coherency between platform devices; and more. While each consortium differs in its specific area of focus, they all drive the need for open standards and the ability to leverage existing technologies.

The CCIX consortium has tripled its membership and is getting close to releasing its specification. The CCIX specification enables enhanced performance and capabilities over PCIe Gen4, leveraging the PCIe ecosystem to enhance future compute and storage platforms.

As a board member of CCIX and OpenCAPI, and a member of Gen-Z, Mellanox is determined to help drive the creation of open standards. We believe that open standards and open collaboration between companies and users form the foundation for developing the technology needed for next-generation cloud, Web 2.0, high performance computing, machine learning, big data, storage and other infrastructures.