The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.
In order to vote, you will need to register with the OpenStack Foundation: https://www.openstack.org/join/register/. Voting for all presentations closes on Monday, February 23 at 5:00 PM CST (GMT-6:00).
For your reference, we have included a list of Mellanox sessions below; click on a title to submit your vote:
Excerpt: The world is rapidly turning to the cloud to achieve elasticity, higher efficiency and lower operational expenses, using either converged or hyper-converged infrastructure. Though converged infrastructure provides well-architected building blocks in an integrated product that is properly sized, validated and tested, there are situations where tighter application- or solution-based integration with software-defined storage is prudent.
Whether converged or hyper-converged, the building blocks are compute, storage, networking and the virtualization layer that runs on top of them. In a virtualization environment based on VMware vSphere or Microsoft Hyper-V, the virtualization and data management layers are bundled. However, most OpenStack deployments, which are based on open virtualization such as KVM or Docker containers, do not provide integrated data management. In this presentation, we present the challenges faced while deploying a hyper-converged environment using commodity off-the-shelf components with open-source OpenStack, and how to make a seamless migration from a converged to a hyper-converged architecture.
Presenters: Anat Kleinmann, Aviram Bar Haim (Mellanox) along with Evgeniya Shumakher (Mirantis)
Excerpt: Supporting a multi-vendor open environment is an exciting challenge: the solution must be flexible, support a large variety of features and partners, and adapt to many types of customers in a short period of time. So we are glad to talk not only about the problems but also about the solution that the Fuel team found together with Mellanox.
Presenters: Matthew Sheard (Mellanox) along with Alise Spence (Power Systems Cloud), Antonio Rosales, Amy Anderson, Roger Levy (MariaDB Corp)
Excerpt: If your organization is like most, you have applications and services that rely on a LAMP stack. Come to this session to hear from a panel of LAMP stack companies (Canonical, MariaDB, Zend and Mellanox) and learn what these companies have discovered about how processor architecture affects performance for critical applications being deployed with OpenStack.
Presenters: Anat Kleinmann, Erez Cohen (Mellanox)
Excerpt: With the flexibility that OpenStack offers – customization of virtual machines, elasticity of resources and the combination of different storage technologies with compute instances – cloud should be a dream come true for HPC users. But along with the advantages of migrating to the cloud come clear challenges, largely stemming from the fundamental architecture differences between HPC and cloud. A traditional HPC cluster is performance-critical and is optimized for applications, with tuned network, storage and user-land communication to take full advantage of the bare-metal infrastructure.
In the cloud, however, applications are designed to run in virtualized environments with the help of hypervisors, virtual networks, and shared, virtualized storage, which impacts performance significantly. HPC customers also face the challenge of moving massive chunks of data to and from the cloud, and the threat of cloud vendor lock-in. In this presentation, we will investigate whether it is really possible to move HPC workloads to the cloud while maintaining bare-metal-like performance.
Presenter: Chloe Ma (Mellanox)
Excerpt: Promising to increase service agility and scalability, NFV is developing at an unprecedented pace. Virtualized network function vendors are scrambling to move their software from running on special-purpose appliances to running on virtual machines. But by simply porting the code, the VNFs still cannot take full advantage of the cloud infrastructure to achieve scale-out elasticity and high availability. Smart VNF vendors are re-architecting their applications to be cloud-native, with smaller failure domains, stateless transaction processing and state storage pools.
Presenter: Chloe Ma (Mellanox)
Excerpt: NFV separates network service software from the hardware it runs on, which means that the underlying NFV infrastructure needs to be able to accommodate a variety of VNFs with different requirements for storage, networking and IPC performance and latency. At the same time, by splitting a network service into multiple VNFs that may run in VMs on different physical servers, a side effect is that precious CPU resources are wasted processing packet I/O between VNFs instead of being dedicated to actual application processing, such as DPI or signature matching for threat defense. In this session, we present a way to offload packet I/O processing through RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE), which can result in minimal CPU cycles spent on packet I/O and significantly higher CPU efficiency for service processing.
Presenter: Chloe Ma (Mellanox)
Excerpt: In this session, we present an innovative way of implementing VNF service graphs through a set of proven technologies that high-performance computing and hyper-scale cloud providers have been using: a combination of efficient IPC to pass only a portion of the packet and its metadata through the service chain, and the use of Remote Direct Memory Access (RDMA) technologies to access the packet buffer when needed, at very low latency and with very little CPU overhead. We will demonstrate the service latency and efficiency improvements with an actual telco service.
Presenters: John Kim (Mellanox), Veda Shankar (Red Hat)
Excerpt: GlusterFS has traditionally been used for scale-out file storage for various technical file workloads. The newly released Red Hat Storage version 3.0, based on GlusterFS 3.6, introduces features which will allow customers to deploy an open, scale-out and software-defined storage solution capable of scaling to petabytes of capacity across physical, virtual and cloud environments. Red Hat Storage now offers a Hadoop File System plugin that is integrated with Hortonworks Data Platform 2.1 with RDMA support over InfiniBand, plus GlusterFS volume-level snapshots. The solution provides improved scalability with support for 60 disk drives per storage node, and improved object access via an OpenStack Icehouse rebase of the Swift object storage support. By attending this session, audience members will learn how this updated storage solution can achieve higher performance and capacity scale to meet the increasing demands from growing markets such as high-performance computing, big data, cloud applications, and large-scale Web 2.0 deployments.
Presenters: Ramnath Sai Sagar (Mellanox), Ben Cherian (Red Hat)
Excerpt: With an estimated 15 billion connected devices generating exabytes of data this year, coupled with enterprises racing toward OpenStack, there is a need for scale-out storage that is efficient and also feasible from a CapEx and OpEx perspective. A new approach is required to manage this immense amount of invaluable information. Enter Ceph, a massively scalable, open-source, software-defined storage solution which uniquely provides object, block and file system services from a single unified storage cluster. The power of Ceph can transform commodity hardware into a scalable, reliable, fault-tolerant and intelligent storage solution. The architecture of a scale-out solution mandates high-performance, efficient networking. However, deep analysis has shown that Ceph's performance can be limited even by a 40GbE network, especially when flash storage is used with many object storage daemons (OSDs). In this presentation we discuss why Ceph is dubbed the de facto storage backend for OpenStack, and whether a high-capacity network with RDMA integration can truly leverage the power of Ceph.
Presenters: Deepak Shetty, Ramana Raja, Csaba Henk (Red Hat)
Excerpt: In recent times, there has been a lot of work in Manila to provide a framework for NFS-Ganesha, a user-space NFS server that can be leveraged by storage backend drivers intending to support NFS protocols. This talk will cover the NFS-Ganesha work and how new driver authors can benefit from it. It will also walk through the 'cert'-based access type, covering how it works, what admins and users can expect from it, and how to provision shares that provide multi-tenant isolation and secure access to the backend storage.