All posts by Scot Schultz

About Scot Schultz

Scot Schultz is an HPC technology specialist with broad knowledge of operating systems, high-speed interconnects and processor technologies. A 25-year veteran of the computing industry, Schultz joined the Mellanox team in March 2013 as Director of HPC and Technical Computing. Prior to joining Mellanox, he spent 17 years at AMD in various engineering and leadership roles, most recently in strategic HPC technology ecosystem enablement. Scot was also instrumental in the growth and development of the OpenFabrics Alliance as co-chair of its board of directors. Scot currently serves as Director of Educational Outreach and is a founding member of the HPC Advisory Council, as well as a member of various other industry organizations. Follow him on Twitter: @ScotSchultz

NVIDIA DGX SuperPOD features Mellanox EDR 100Gb/s on the 2019 Top 500 Supercomputers

World-Record Setting Performance Achieved Through Simplicity

While Mellanox continues its leadership as the standard interconnect provider for the most powerful research and commercial supercomputers around the world, imagine deploying a world-record-setting supercomputer built from standard components in just a few weeks. On the June 2019 TOP500 list of the world's most powerful supercomputers, NVIDIA's latest supercomputer, based on the DGX-2H, ranked number 22. The NVIDIA DGX SuperPOD integrates 96 nodes of the most popular NVIDIA accelerated compute architecture, tightly coupled with Mellanox's best-in-class network fabric.

The NVIDIA DGX-2 is the world's first 2.1 petaFLOPS system, powered by 16 of the world's most advanced GPUs, accelerating the newest deep learning model types that were previously untrainable. To sustain the data ingest rate and keep GPU efficiency at its highest, each system includes ten Mellanox EDR 100Gb/s HCAs, for a total of 1 terabit per second of network throughput!

For modern-day workloads such as deep learning, Mellanox InfiniBand provides the highest performance and scalability. Both Mellanox’s EDR 100Gb/s and the latest HDR 200Gb/s InfiniBand solutions provide in-network computing capability, native RDMA, GPUDirect and numerous other acceleration engines and storage offloads. The latest “Scalable Hierarchical Aggregation and Reduction Protocol” (SHARP) engines enhance acceleration for both deep learning applications and HPC workloads.
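To make in-network computing concrete, below is a minimal sketch of the collective operation SHARP accelerates: the allreduce at the heart of data-parallel deep learning training. It is illustrative only and assumes Python with mpi4py and NumPy (the post itself shows no code); with SHARP enabled in the underlying MPI stack, the very same call is executed by the switch fabric instead of the host CPUs, with no application changes.

    # A minimal allreduce sketch (assumes mpi4py and NumPy; dummy values).
    # With SHARP enabled in the MPI library, this exact call is offloaded
    # to the switches; the application code does not change.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank holds a local gradient shard (dummy data for illustration).
    local_grad = np.full(4, float(rank), dtype=np.float32)
    summed = np.empty_like(local_grad)

    # The reduction that SHARP can perform in-network instead of on the hosts.
    comm.Allreduce(local_grad, summed, op=MPI.SUM)

    if rank == 0:
        print("global gradient sum:", summed)

Run it under an MPI launcher, e.g., mpirun -np 4 python allreduce_sketch.py (the file name is arbitrary).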

We are proud to be part of the NVIDIA DGX SuperPOD, a supercomputer that sets the standard for the most demanding workloads.

Read the NVIDIA Blog:

https://blogs.nvidia.com/blog/2019/06/17/dgx-superpod-top500-autonomous-vehicles/

Artificial Intelligence: The Future of Romance is Anything but Artificial

It is a holiday that actually harks back to Pagan, Roman and Christian origins, all of which seem to involve someone being martyred, though that is probably not a good excuse for forgetting to bring your partner flowers and candy. Despite this somewhat dark and murky beginning, Valentine's Day is a tradition that is now celebrated by exchanging flowers, candy, cards and other tokens in the United States, Canada, Mexico, the United Kingdom, France and Australia.

In Great Britain, Valentine's Day began to be celebrated in the 17th century, and by the mid-18th century it was commonplace for friends and lovers of all social classes to exchange small tokens of affection or handwritten notes. By 1900, printed cards began to replace written letters thanks to improvements in printing technology, and now approximately 150 million cards are exchanged annually. Last year, total spending for the holiday topped $18.2 billion, an average of $136.57 per person. Romance, it turns out, is big business. Global revenues from online dating are expected to top $1.3B in 2018.

When it comes to the pursuit of love, technology has made far more advancements than flowery cards and better printing. In just one short decade, the cultural perception of online dating has shifted from a somewhat negative belief that only desperate people peruse dating sites to a far more moderate sentiment, with the majority of Americans now saying that online dating is a good way to meet people. In addition, online dating has increased among adults under age 25 as well as those in their late 50s and early 60s.

The most successful on-line dating services use detailed profiles, proprietary matching algorithms and a closely managed communications process to help the lovelorn find Mr. or Ms. Right. Under the hood, on-line dating sites are a combination of data mining, Big Data, and AI which brews up a mix of business intelligence, psychological profiling, matching algorithms and a variety of communications technologies.


As mentioned in past blogs, AI is not limited to traditional data on a spreadsheet. It can interpret and aggregate demographic and geo-spatial data. AI is able to cross-reference data, find commonalities and draw insights that were previously impossible due to data silos or the sheer amount of time it would take for a human to crunch the numbers. It can also consider seemingly unrelated or outside factors that mate seekers may not immediately see as relevant. For example, online dating sites that plumb your social media would pick up on what you like and post about. Therefore, if you post vacation pics only from tropical islands and follow a variety of wine blogs, AI could deduce that your chances for compatibility probably don't lie with potential mates who are ardent teetotalers and like to trek the Alps. This ability to rapidly analyze data, and potential correlations, creates a more comprehensive matching system, one with higher odds than perhaps your cousin who has been trying to set you up with her neighbor for months now.
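To make that matching step concrete, here is a toy sketch of one common approach: represent each profile as a vector of interest weights and rank candidates by cosine similarity. All names, features and weights here are hypothetical; real services use far richer models.

    # Toy profile-matching sketch (all names, features and weights hypothetical).
    import numpy as np

    # Feature order: [tropical beaches, wine, alpine trekking, teetotaling]
    profiles = {
        "you":       np.array([0.9, 0.8, 0.1, 0.0]),
        "trekker":   np.array([0.1, 0.0, 0.9, 0.9]),
        "sommelier": np.array([0.7, 0.9, 0.2, 0.0]),
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means identical interests, 0.0 means none shared.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    me = profiles["you"]
    ranked = sorted(
        ((name, cosine(me, vec)) for name, vec in profiles.items() if name != "you"),
        key=lambda pair: pair[1],
        reverse=True,
    )
    print(ranked)  # the "sommelier" scores far above the "trekker"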

AI is already going well beyond the average dating site:

  • Rumor has it that financial institutions may routinely use AI to predict, with 90 percent accuracy, the stability of relationships in couples applying for loans, in order to reduce the risk of defaults.
  • If you worry that your relationship might not succeed, artificial intelligence can now take a guess at whether you and your partner can actually go the distance. An AI firm called DataRobot has built a tool based on Stanford University data that asks you six questions about your relationship and predicts your chances of staying together for the next couple of years.
  • Coca-Cola uses AI to track product mentions across social media (about 35 million Twitter followers as an example), so the soft drink giant can see where their buyers are, who they are, and what drives their positive or negative experiences. AI opinion mining can now collect data from multiple channels, analyze customer sentiments, and pick up patterns in those feelings to allow marketing and product to align with customer trends.
  • Recommendation engines like Netflix's build AI into the platform to learn user preferences, curate viewing options based on those likings, and then measure how the selections affect metrics like bounce rate and free-trial conversion.

The Power of AI

The future of AI in dating holds the promise of becoming even more sophisticated and accurate. One of the biggest hurdles in improving compatibility matches for online dating sites is client honesty. Roughly 54 percent of online daters reported that potential matches seriously misrepresented themselves on their profiles. The same predictive analysis that currently drives product recommendation engines could be applied to matchmaking, moving online dating beyond user-generated profiles to include data like streaming music playlists, shopping histories, and the sentiment of a person's social shares. This way, if a 56-year-old male tries to list his age as '42' and claims that he likes long walks on the beach when he hasn't been to the beach in decades, analysis of his social media posts, likes and history should be able to prompt him to alter his answers, which would ultimately improve his chances of finding a compatible mate.

In Tune with Technology: Music and Mellanox Networking Solutions

This blog post was co-authored by Scot Schultz and Jeff Shao.

Music. It's been around since prehistoric times, and according to the philosopher Plato, it is music that gives soul to the universe, wings to the mind and flight to the imagination. Fascinatingly enough, music is one of the few activities in life that engages the entire human brain. Listening to music has also been shown to improve the immune systems of adults. No doubt about it, music is essential to defining what makes us human.

Sunday, January 28 will mark the 60th Annual Grammy Awards. Originally, winners received what was called a Gramophone Award, named for the music technology of the time: the gramophone, an early version of the modern record player.

Technology for music has come a long way since the gramophone. From 1888, when Thomas Edison introduced the electric motor-driven phonograph; to 1948, when the first 33 1/3 vinyl record was introduced by Columbia Records; through 1954, when transistor radios made music portable; to 1982, when CDs became all the rage, technology has driven music to new levels of accessibility and quality.

Music continues to make strides in how it is consumed and produced, and many of these advancements have been made possible by innovation coming directly from the networking industry. More and more music is being consumed well beyond the typical MTV format. Technology has enabled music to proliferate everywhere. Digital formats have come to dominate the recording industry, allowing our portable MP3 players, mobile phones and wearable devices to carry our favorite music and even stream audio content from anywhere in the world in real time.

We take listening to music from YouTube or iTunes for granted nowadays, but audio digitization has come a long way, almost inconspicuously. Remember how much time it took to download a few-megabyte MP3 file over a 56K dialup modem, if you ever tried? Downloading a lossless high-quality FLAC file at the same speed would be unimaginable. Along with the evolution of audio codecs, we have come from 144kbps MPEG-2 audio to 4,000kbps MPEG-4 stereo audio. A song grows several times in size from standard MP3 quality (e.g., 256kbps) to lossless high quality (over 1Mbps), and today the typical size of a 5-minute high-quality MP3 soundtrack is easily around 10MB or higher. When many audio files or streams are uploaded or downloaded to the cloud, edited and mastered in multimedia production, or broadcast over high-definition audio channels such as DAB and FM HD, much higher networking bandwidth is called for.
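The sizes quoted above follow from simple arithmetic: file size equals bitrate times duration, divided by eight to convert bits to bytes. A quick back-of-the-envelope check, as a sketch (figures rounded):

    # Back-of-the-envelope check of the file sizes quoted above.
    def size_mb(bitrate_kbps: float, seconds: float) -> float:
        # bits = bitrate * time; divide by 8 for bytes, by 1e6 for megabytes
        return bitrate_kbps * 1000 * seconds / 8 / 1e6

    five_minutes = 5 * 60
    print(size_mb(256, five_minutes))   # ~9.6 MB: the ~10MB high-quality MP3 above
    print(size_mb(1000, five_minutes))  # ~37.5 MB for a >1Mbps lossless track

    # Download time for a 5 MB MP3 over a 56 kb/s dialup modem, in minutes:
    print(5e6 * 8 / 56_000 / 60)        # ~12 minutes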

For this very reason, the M&E industry and its standards bodies are rushing to bring out standards and lay out new infrastructures to support these overwhelming needs. To facilitate high-speed transmission of high-quality digital media content, SMPTE approved the ST 2110 standards for Professional Media Over Managed IP Networks in September 2017. SMPTE ST 2110 is an all-IP-based standard that “breaks” video, audio (including the processing of audio in multiple languages) and ancillary data into independent streams, but keeps them synchronized and synthesized at the end point. As adoption of this standard progresses, the advantages of the shift are comparable to those achieved when the industry moved from physical tapes to virtual files for content storage. Files are not treated as if they were just virtual tapes; rather, all the benefits of software and virtualized access have come to be realized, with new workflows and functionality offered.

Another advantage is that intra-facility traffic can now be entirely IP-based. Thus, rather than requiring two separate sets of switches — SDI (Serial Digital Interface) switches for digital video media and IP/Ethernet switches for general data — facilities can rely on one common data center infrastructure, which is where Mellanox plays with perfect pitch.

The speed of SDI technology has not kept up with accelerating network speeds and the bandwidth that IP (Internet Protocol) networks are capable of. Ethernet-based IP networks are now commanding attention across the entire broadcast television ecosystem. As bitrates increase and equipment prices drop, IP-based communication technologies are pushing specialized communication systems such as SDI into retirement. One thing is clear: higher quality content (HD, UHD, 4K and beyond) requires more horsepower and bandwidth. The number of UHD/HD video channels that can be squeezed into a 25GigE or even 100GigE connection is a convincing argument for migration to IP.
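The channel-count argument is easy to see with rough numbers. As a sketch, assuming approximate SDI-equivalent stream rates (about 1.5 Gb/s for interlaced HD, 3 Gb/s for 1080p, 12 Gb/s for UHD; actual ST 2110 essence rates vary by format and color depth):

    # Rough channel-count math for IP links (stream rates are approximate,
    # SDI-equivalent assumptions; real ST 2110 rates vary by format).
    STREAM_GBPS = {"HD 1080i": 1.5, "HD 1080p": 3.0, "UHD 2160p": 12.0}

    for link_gbps in (25, 100):
        for fmt, rate in STREAM_GBPS.items():
            print(f"{link_gbps}GigE carries ~{int(link_gbps // rate)} x {fmt} streams")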

Mellanox IP-based switches ensure an efficient broadcast network with proven low-latency, high-bandwidth connectivity, delivering extremely reliable video through the following:

  • Predictable buffer allocation: A flexible, large buffer pool available to all ports on the switch absorbs the bursty traffic that is prevalent in broadcast applications and ensures that all packets are forwarded as required.
  • QoS with DSCP marking: Allows important flows to be marked, helping to ensure a non-blocking fabric (see the sketch after this list).
  • Predictable network performance: Consistently delivers predictable performance for all packet sizes, as described and well documented in the third-party Tolly test report.
  • Consistent and very low port-to-port latency and jitter: Both port-to-port latency and Packet Delay Variation (PDV) have been tested by leading media transport solution providers, and Mellanox had the lowest and most deterministic latency of all the vendors under test.
  • In-fabric containerized broadcast services: Containerizing IP studio services and running them on the switch allows broadcast engineers to focus on building the ideal IP media fabric for their studio without deploying additional servers and virtual machines.
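For a sense of what DSCP marking looks like from the sending application's side, here is a minimal, hypothetical sketch (the multicast address, port and payload are made up; IP_TOS is available on Linux and most Unix systems). The switch is then configured to prioritize the marked flow:

    # Minimal DSCP-marking sketch: tag a media flow's packets with DSCP EF (46)
    # so switches can prioritize them. Address, port and payload are hypothetical.
    import socket

    DSCP_EF = 46             # Expedited Forwarding, commonly used for live media
    tos = DSCP_EF << 2       # DSCP occupies the upper 6 bits of the IP TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    sock.sendto(b"audio-sample-payload", ("239.1.1.1", 5004))  # RTP-style multicast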

Mellanox Spectrum switches allow broadcasters to save time and money, and to become more innovative.

Most operators will continue to separate traffic by priority; however, Mellanox switches have the intelligence to prioritize real-time media streams. And SMPTE ST 2110 standards are video-format-agnostic and therefore support Ultra HD, HDR, and other new and emerging formats.

Another area of technology heavily influencing music is artificial intelligence. The AI industry is expected to be worth more than $70 billion by 2020 and will continue to exert influence on our lifestyles and the way we consume data.

Popgun claims to have created the first superhuman AI-powered musician, one that learns from human musicians and complements or augments music compositions. Amper is an AI-enabled music composer, performer, and producer, creating music from scratch. Pacemaker is an AI DJ that creates digi-mixes and re-mixes from streams, not from digital files. Weav conceives a song not as the final product, but as a recipe for variations of itself along dimensions like tempo, energy, and mood. Jaak uses blockchain technology to identify the usage of, and rights to, song streams. These are but a few examples of how AI is driving the music industry. AI, in turn, relies on the power of Mellanox solutions.

A recent use of AI came from IBM's Watson, which sifted through five years of unstructured culture data, as well as the lyrics of the top 100 songs for each week within that five-year frame, to establish the emotional sentiment behind each song. Watson then analyzed the instrumentation, rhythm and pitch, and the results were given to artist Alex Da Kid to experiment with. Ultimately, IBM's Watson became the collaborator on the artist's new song, entitled, appropriately enough, “Not Easy”. Imagine what a Beethoven or Gershwin could have done with the same data.


Stanford Researcher is the Medical World's Braveheart

A normal human heart has four chambers: two upper chambers called atria, which receive blood into the heart, and two lower chambers called ventricles, which pump blood out of the heart. A single ventricle defect, also called single defect anomaly or single ventricle physiology, refers to a variety of cardiac defects where only one of the heart's two ventricles functions properly. A single ventricle defect is a congenital condition (meaning it is present at birth) and is among the most complex of heart defects. This condition occurs in five of every 100,000 live births, as the heart develops during the first eight weeks of the mother's pregnancy. Current surgical correction, which has been practiced over the past five decades, shows poor outcomes, with mortality rates as high as 30 percent. This is complicated by the fact that the condition requires multiple, complex surgeries, and while survival rates continue to improve, the condition still carries some of the highest morbidity and mortality rates related to congenital heart surgery.

That's where Stanford researcher and Cornell assistant professor Dr. Mahdi Esmaily comes in. Dr. Esmaily, or Dr. Braveheart, as some of us like to call him, has been conducting research that is nothing short of groundbreaking and lifesaving. A postdoctoral scholar at the Center for Turbulence Research at Stanford University, Dr. Esmaily has applied his engineering expertise to a long-standing problem in the medical field. He has conducted computational simulations of blood flow to evaluate an innovative new surgical procedure. The outcome is that infant cardiac surgeons may be able to adopt a radical new surgery built on a concept straight from engineering: constructing an anatomy inspired by an ejector pump, a device typically found in industrial power plants. In this operation, the high-energy flow from the systemic circulation is injected into the low-energy flow from the upper body, assisting its drainage into the pulmonary arteries. This is counter to the current surgery, which is known to cause inadequate pulmonary blood flow in already stressed patients, particularly tiny infants.

The High Performance Computing Center (HPCC) at Stanford University was founded to provide high performance computing resources and services that enable computationally-intensive research within the School of Engineering and to support the efforts of scientists performing sponsored research. The High Performance Computing Center leverages Mellanox InfiniBand in their High Performance Computing research to enable larger simulations, analyses and faster computation times than are possible using computers available to individual researchers.

Dr. Esmaily used high performance computing resources to simulate realistic 3-dimensional models of neonatal circulations. These simulations predicted that his proposed surgery provides significantly higher oxygen delivery at a lower single-ventricle workload than the conventional operation, meaning that part of the heart will not have to work as hard. The conclusion is that while there is more research to be done, the use of an ejector pump, augmented by Dr. Esmaily's technique, holds the promise of giving surgeons a brand new, less risky option for newborns undergoing this kind of surgery, and with it, hope for the families of infants born with heart defects.


Climate Change and Global Health: What NCI is Doing to Help

Climate change is poised to become what the prestigious British medical publication The Lancet has called “the biggest global health threat of the 21st century.” The health risks associated with climate change are numerous and alarming. Just to name a few: increases in heat-related illnesses and death; extreme weather-related injuries and mortality; aggravated chronic illnesses; spread of infectious diseases; increases in asthma and respiratory allergies, and an upsurge in chronic respiratory disorders; rising malnutrition and child development complications; increases in stress-related and mental health disorders…the list goes on and on. In addition, there are tangible impacts related to both population displacement and migration, as well as climate-triggered instability (famine) and subsequent conflict. The healthcare sector is just beginning to understand that climate change will have major impacts on health care costs, services and delivery. The World Health Organization has estimated some of the impacts of climate change to be:

  • Between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year, from malnutrition, malaria, diarrhea and heat stress.
  • The direct damage costs to health (i.e., excluding costs in health-determining sectors such as agriculture, and water and sanitation) are estimated to be between US$ 2-4 billion/year by 2030.
  • Areas with weak health infrastructure – mostly in developing countries – will be the least able to cope without assistance to prepare and respond.
  • Reducing emissions of greenhouse gases through better transport, food and energy-use choices can result in improved health, particularly through reduced air pollution.

This is why climate change and tracking the climate have such far-reaching implications. Last quarter, I wrote about the growing impact of global warming on the Great Barrier Reef and coral bleaching, and the groundbreaking research being done at Australia's National Computational Infrastructure (NCI). Now, I'd like to highlight the important work they are doing in climate modeling.

ACCESS is Australia’s national earth system science and climate modelling suite, a complex coupled-system model that comprises atmosphere, ocean, sea-ice, land, and other components — derived from the best of the UK, USA, France, and Australia to provide national weather and climate prediction capability. The model’s complexity comes from the vast span of time scales over which it has to work: from hours for extreme weather events (storms and bushfires), days for general weather prediction, months for seasonal prediction, decades for planning for environmental change, through to centuries and millennia for long-term climate change.

ACCESS is developed through a collaboration of the Bureau of Meteorology, CSIRO, the academic community through the ARC Centre of Excellence in Climate System Science, and NCI.

A flooded road with depth indications in Queensland, Australia.

NCI’s role is mission critical, not only as the collaborative, integrated development platform, but also through its unique expertise in optimizing the performance of key model components. Performance improvements of 30-40 percent and much higher code scalability (up to 20-fold improvements with some codes now exploiting up to 20,000 cores), are enabling greater operational efficiency and productivity with faster time to results, more accurate simulations that enable new scientific outcomes and insight, and much heightened prediction capability.

In real terms, the outcomes are wide-ranging and include multi-billion dollar benefits for agriculture through more accurate seasonal prediction, the reduction of severe economic losses and the mitigation of dangers to public safety and health from extreme weather events.

As part of this, NCI is using Mellanox's interconnect solutions to allow for faster inter-node connectivity and access to storage, providing Australian researchers and scientific research organizations with critical on-demand access to NCI's high-performance cloud. This cloud facilitates scientific workloads with a deployment that combines the Mellanox CloudX solution with OpenStack software to support high performance workloads on a scalable and easy-to-manage cloud platform. CloudX simplifies and automates the orchestration of cloud platforms and reduces deployment time from days to hours. The NCI deployment is based on Mellanox 40/56 Gb/s Virtual Protocol Interconnect adapters and switches supporting both InfiniBand and Ethernet. NCI also has Mellanox's 100Gb/s EDR InfiniBand interconnect for its new Lenovo NextScale supercomputer. This powerful combination of storage and compute power enables NCI to deliver extremely complex simulations and more accurate predictions, all with the aim of improving the human condition.


How Artificial Intelligence is Revolutionizing Personalized Medicine

AI-powered personalized medicine is actually happening now. The University of Tokyo recently reported that Watson, IBM's cognitive supercomputer, correctly diagnosed a rare form of leukemia in a 60-year-old woman. Doctors originally thought the woman had acute myeloid leukemia, but after examining 20 million cancer research papers in 10 minutes, Watson was able to correctly determine the actual disease and recommend a personalized treatment plan. AI and its related applications are changing healthcare as we know it. The advancements made in AI will revolutionize research and, ultimately, personalized medicine.

The Historical Challenge with Data

Big Data has been a buzzword for several years now. Hospitals, like enterprises, have been drowning in big data. From the moment doctors began keeping patient records, they, and now hospitals, have been amassing large quantities of complex data within patient medical records, including handwritten notes, X-ray results, blood samples, vital signs, DNA sequences, and more. Historically, this data has been disparate and existed in hard copies only, making it nearly impossible to analyze in aggregate. Now, with AI, analytic tools and other technological advancements, there is a way to actually organize, analyze and cross-reference the data, enabling hospitals, doctors and researchers to finally put it to use.

Preparing for Impact

AI is impacting nearly every aspect of the healthcare industry, from patient care, such as the examples described herein, to hospital security and pharmaceutical drug development (stay tuned for a future post on how AI may just be the solution to rising drug prices). Mellanox is committed to the cause and is helping to accelerate many of the world's leading AI, ML and DL systems with solutions like RDMA, GPUDirect RDMA, the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™, and intelligent interconnects that are able to handle the highest rates of real-time data and mitigate network congestion.

AI Revolutionizes Personalized Medicine

While there is generally a solution to any problem, oftentimes it isn't that we can't see the solution; it's that we can't correctly identify the problem. AI is able to learn from each piece of data it is given and rapidly re-evaluate its analysis as more and more data is received. This enables doctors and researchers to better identify problems and, subsequently, the potential solutions to those problems. A door to a world of possibilities has now been opened, and with it, the potential to find cures for thus-far incurable diseases, perhaps even within our lifetime.

AI is not limited to traditional data on a spreadsheet. It can interpret and aggregate imaging, text, handwritten notes, test results, sensor data, and even demographic and geo-spatial data. AI will be able to cross-reference data, find commonalities and draw insights that were previously impossible due to data silos or the sheer amount of time it would take for a human to crunch the numbers. It can also consider seemingly unrelated or outside factors that doctors and researchers may not immediately see as relevant: for example, environmental factors such as elevation, humidity and proximity to certain dense mineral deposits, factories or agriculture. This ability to rapidly analyze data, and potential correlations, creates a more comprehensive and holistic view into a patient's health.

AI in Action Today

  • The National Cancer Institute has partnered with NVIDIA to develop an AI framework, powered by Mellanox InfiniBand adapters, aimed at supercharging cancer research. The framework, CANDLE (Cancer Distributed Learning Environment), will leverage AI to extract and study millions of patient records with the goal of understanding how cancer spreads and recurs. This is an example of AI being able to pore over large amounts of genomic data quickly so doctors can draw conclusions.
  • A recent study published in Neurobiology of Aging found that AI could help detect signs of Alzheimer's in patient brain scans before doctors can. AI is currently being used to study scans of healthy brains and brains affected by Alzheimer's to learn and identify the telltale signs of the disease.
  • Medecision, a leading global healthcare information technology provider, is also employing AI to sweep through large amounts of data, identifying variables and predicting avoidable hospitalizations in diabetes patients.

The medical community has only just begun to scratch the surface of what can be achieved with AI.


The Heat is On

It's on everyone's bucket list: experiencing the Great Barrier Reef. In fact, Traveler Magazine puts it on the Ultimate Travel Bucket List. However, before you book that dream vacation to this World Heritage site, know that the reef is fragile, and scientific research says it is getting more fragile with each passing day. So much so that scientists are working frantically to preserve it. In fact, the work being done at Australia's National Computational Infrastructure is nothing less than a virtual wake-up call for all of us about the impact global warming is having on the reef.

NCI is a national provider of high-performance research computing and data services. NCI systems currently support around 4,000 researchers working on more than 500 projects. These researchers come from more than 34 Australian universities, national science agencies and medical research institutes, with an increasing industry user base. As Australia's national research computing service, the organization provides world-class, high-end services to Australia's researchers, the primary objectives of which are to raise the ambition, impact, and outcomes of Australian research through access to advanced computational and data-intensive methods, support, and high-performance infrastructure.

Back in 2013, NCI selected Mellanox's interconnect to support Australia's national research computing infrastructure, which provides world-class, high-end services to Australia's researchers. Mellanox's interconnect solutions allow for faster inter-node connectivity and access to storage, providing researchers and scientific research organizations with critical on-demand access to NCI's high-performance cloud. The system is designed around Mellanox CloudX™, a reference architecture that provides the most efficient, highest-performing scalable clouds based on Mellanox's superior interconnect.

Then, in 2016, NCI chose Mellanox's 100Gb/s EDR InfiniBand interconnect for its newest Lenovo® NextScale supercomputer. The new system delivered a whopping 40 percent increase in NCI's computational capacity.

Some research can take years, even decades, to come to fruition, but at NCI, the results are already in. Earlier in 2017, it was reported that more than two-thirds of the coral in Australia's Great Barrier Reef was experiencing enormous amounts of bleaching. A similar bleaching event occurred in 2016, and together the 2016 and 2017 events have devastated a 1,500 km (900 mile) stretch of the reef. Before 2016, there had been only two bleaching events along the Great Barrier Reef in the past two decades, reflecting an alarming trend. That's when researchers came to NCI to get their work done.

Leveraging the computing resources at NCI, researchers looked at four key extreme Australian events: the Angry Summer of 2012/13; the Coral Sea marine heatwave of 2016; the severe rain event in NE Australia in 2010; and the 2006 drought in southeast Australia. They modeled how often these events would occur under each warming scenario. Results showed that keeping global temperature rise below 1.5°C would have a clear benefit for Australia in terms of reducing extreme weather events and the costs associated with such incidents. Further, scientists at NCI reported that if global average surface temperatures increase just 1.5°C above pre-industrial conditions, a repeat of the coral bleaching that severely damaged the Great Barrier Reef earlier this year becomes more than twice as likely. Scientific modeling results from NCI also revealed that if the world heats up by 2°C above the pre-industrial world, the odds of another mass bleaching event nearly triple.

These findings from University of Melbourne scientists at the ARC Centre of Excellence for Climate System Science, are the result of research using NCI’s Mellanox-enabled HPC facilities to look at how Australian extremes in heat, drought, precipitation and ocean warming will change in a world 1.5°C and 2°C warmer than pre-industrial conditions.

As the world continues to vigorously debate the impact of global warming, NCI is helping to bring the facts to the table — not in months or weeks, but now, today. For a World Heritage site in peril, these findings have long-term implications that require action now.


Mellanox with IBM Research: Deep Learning, THIS is how you do it!

Mellanox’s role in record setting deep learning performance with IBM PowerAI

This year has already proven to be a game-changer in next-generation performance for artificial intelligence, as the race continues to solve the challenges of scalable deep learning. Most popular deep learning frameworks today can scale to multiple GPUs within a server, but scaling across multiple servers with GPUs remains difficult.

This challenge in particular is where Mellanox has been the clear leader, as the only interconnect solution able to deliver the performance and offload capabilities needed to unlock the power of scalable AI.

IBM Research just announced an amazing achievement: unprecedented performance and close-to-ideal scaling with new distributed deep learning software, which achieved record-low communication overhead and 95% scaling efficiency on the Caffe deep learning framework using Mellanox InfiniBand and 256 NVIDIA GPUs across 64 IBM Power systems.

With the IBM DDL (Distributed Deep Learning) library, it took just 7 hours to train ImageNet-22K using ResNet-101. Going from 16 days down to 7 hours changes the workflow of data scientists.

That’s a 58x speedup!
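The arithmetic behind those headline numbers is worth a quick check. Here is a sketch (the baseline GPU count is an assumption for illustration only; the post states just the 16-days-to-7-hours result on 256 GPUs):

    # Sanity math for the training-time and scaling figures quoted above.
    baseline_hours = 16 * 24   # 16 days
    ddl_hours = 7
    speedup = baseline_hours / ddl_hours
    print(f"wall-clock speedup: ~{speedup:.0f}x")  # ~55x, in line with the quoted 58x

    # Scaling efficiency = achieved speedup / ideal speedup for the GPU ratio.
    # If the baseline ran on 4 GPUs, the ideal speedup on 256 GPUs would be 64x:
    gpu_ratio = 256 / 4        # assumed baseline of 4 GPUs, for illustration only
    print(f"implied efficiency: {speedup / gpu_ratio:.0%}")  # ~86%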

 

You can read more at the IBM blogs: https://www.ibm.com/blogs/research/2017/08/distributed-deep-learning/ and https://www.ibm.com/blogs/systems/scaling-tensorflow-and-caffe-to-256-gpus/

And download the whitepaper here: https://arxiv.org/abs/1708.02188

A technical preview of this IBM Research Distributed Deep Learning code is available today in IBM PowerAI 4.0 distribution for TensorFlow and Caffe.

 

EPYC-Curious? A New Datacenter Recipe for a Refreshing and Satisfying Datacenter Featuring Mellanox’s Key Ingredients

Hungry for something new and innovative? At the SIGGRAPH 2017 event, AMD showcased its latest EPYC™ processors alongside 80 Radeon™ Instinct MI25 accelerators, featuring Mellanox's EDR 100Gb/s InfiniBand smart interconnect. This distinctive new blend of ingredients ushers in a new era in tackling today's most complex workloads. In the first systems, unveiled by Inventec, a single rack of Inventec P47 systems can unleash 1 PetaFLOPS of compute power at full 32-bit precision, well suited to the most challenging deep learning, HPC and enterprise-class applications. This is another advancement in the way we think of datacenter deployments, because it puts more cores, threads, compute units, I/O lanes and memory channels to work at one time than any other system in history.
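The rack-level figure is easy to sanity-check. As a sketch, assuming the Radeon Instinct MI25's published peak of roughly 12.3 TFLOPS at 32-bit precision (a figure not stated in this post itself):

    # Rough check of the 1 PetaFLOPS rack figure (assumes ~12.3 TFLOPS FP32
    # peak per MI25, a published spec not quoted in the post itself).
    mi25_fp32_tflops = 12.3
    accelerators_per_rack = 80
    rack_pflops = mi25_fp32_tflops * accelerators_per_rack / 1000
    print(f"~{rack_pflops:.2f} PFLOPS per P47 rack")  # ~0.98, i.e. about 1 PFLOPS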

These latest Inventec P47 systems include Samsung Electronics HBM2 memory, which is used across the Vega-architecture Radeon Instinct MI25 accelerators. Samsung also provided high-performance NVMe SSD storage and high-speed DDR4 memory. For connecting the Inventec P47 systems, AMD and Mellanox Technologies teamed up to ensure the full potential of this state-of-the-art compute powerhouse. Because high-performance computing and artificial intelligence rely on increasingly complex computations, data bandwidth and high-speed storage requirements are spiraling upward. The savviest users in the world already deploy Mellanox end-to-end, knowing there is no other interconnect solution that can compete at the high end of the High Performance Computing (HPC) environments and Enterprise Data Centers (EDC) where they want to maximize their performance per TCO dollar. They will need every last bit of bandwidth delivered to this new class of compute and I/O performance by Mellanox's high-speed, ultra-low-latency smart HCAs and switches.

Welcome again, AMD, to the race of providing the key ingredients to satisfy the most data-hungry datacenters, with capabilities that are ready for the challenges of today and the workloads of tomorrow!

Mellanox Educates on Caffe, Chainer, and TensorFlow

Why Mellanox is Participating in the OpenPOWER Developer Congress

Mellanox is not only a founding member of the OpenPOWER Foundation, but also a founding member of its Machine Learning Work Group.  AI/cognitive computing will improve our quality of life, drive emerging markets, and surely play a leading role in global economics. But to achieve real scalable performance with AI, the ability to leverage cutting-edge interconnect capabilities is paramount. Typical vanilla networking just doesn’t scale, so it’s important that developers are aware of the additional performance that can be achieved by understanding the critical role of the network.

Because Deep Learning applications are well-suited to exploit the POWER architecture, it is also extremely important to have an advanced network that unlocks the scalable performance of deep learning systems, and that is where the Mellanox interconnect comes in. The benefits of RDMA, ultra-low latency, and In-Network Computing deliver an optimal environment for data-ingest at the critical performance levels required by POWER-based systems.

Mellanox is committed to working with the industry's thought leaders to drive technologies in the most open way. Its core audience has always been end users — understanding their challenges and working with them to deliver real solutions. Today, more than ever, developers, data-centric architects, and data scientists are the new generation of end users driving the data center. They are defining the requirements of the data center, establishing its performance metrics, and delivering the fastest time to solution by exploiting the capabilities of the OpenPOWER architecture. Mellanox believes that participating in the OpenPOWER Developer Congress gives it an opportunity to educate developers on state-of-the-art networking and also demonstrates its commitment to innovation with open development and open standards.

What is Mellanox Bringing to the Developer Congress?

Mellanox will provide on-site expertise to discuss the capabilities of Mellanox Interconnect Solutions. Dror Goldenberg, VP of Software Architecture at Mellanox, will be present to further dive into areas of machine learning acceleration and the frameworks that already take advantage of Mellanox capabilities, such as Caffe, Chainer, TensorFlow, and others.

Mellanox is the interconnect leader in AI/cognitive computing data centers, and already accelerates machine learning frameworks to achieve from 2x to 18x speedup for image recognition, NLP, voice recognition, and more. The company’s goal is to assist developers with their applications to achieve maximum scalability on POWER-based systems.

For more information check out: https://openpowerfoundation.org/openpower-developer-congress/