Dell Technologies HPC Community

  • Home
  • Join
  • About
    • Mission and Objectives
    • Advisory Board
    • Team
  • Events
  • Slack
  • DellXL
    • Join Dell XL
    • Dell XL Meetings
    • DellXL Charter
    • Dell XL Members
    • Dell XL Board
    • Board Members Area

Thank you...

to all the presenters and attendees of the Dell HPC Community meeting in Austin, TX in March 2017.
THESE PRESENTATIONS CONTAIN DELL CONFIDENTIAL INFORMATION AND REQUIRE A DELL NDA (NON-DISCLOSURE AGREEMENT). THESE PRESENTATIONS AND DOCUMENTS MAY NOT BE REPRODUCED, COPIED, OR PROVIDED ELECTRONICALLY OR AS HARDCOPY TO NONMEMBERS. 

March 2017 Presentations

Thomas Lippert
Jülich Supercomputer Center
Director
lippert.pdf
File Size: 29547 kb
File Type: pdf
Download File

Who is afraid of Amdahl’s Law? - Optimizing Scalability by Modular Supercomputing

It is one of the major tenets of parallel computing that Amdahl's Law imposes a threshold on the possible speedups achievable on parallel systems. The speedup is determined by the relation between the code parts that can be executed in parallel and those that are not parallelizable and thus must run sequentially on one, or in parallel on only a few, processing elements. We all know, of course, that this picture is a theoretical one, as it basically applies to strong scaling, i.e., a situation with fixed problem size and growing numbers of processors. In real life, however, most codes provide a set of different or even dynamically changing concurrency levels, modern architectures are equipped with hierarchies of parallelized units, and only a few problems require strong scaling.

Does this mean that Amdahl's Law has become obsolete? Not at all, as it comes back hidden under different levels of concurrency. Therefore, we should take its implications more seriously. Isn't it obvious that we should match the right piece of hardware architecture to the corresponding code-immanent concurrency at the right level of granularity?

This is the central paradigm of our modular supercomputing concept. The modular approach has the potential to minimize execution time while being most energy efficient, simply since code parts with a given intrinsic concurrency sweet spot are mapped onto corresponding architectural counterparts.  
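
As an aside for readers who want the numbers behind the threshold mentioned in this abstract, the short Python sketch below (illustrative only, not part of the talk; the function name and the 5% serial fraction are my own choices) evaluates the classic strong-scaling form of Amdahl's Law, S(N) = 1 / (s + (1 - s)/N), and shows that the speedup can never exceed 1/s no matter how many processors are added.

# Amdahl's Law for strong scaling: illustrative sketch, not from the presentation.
def amdahl_speedup(serial_fraction, processors):
    """Speedup S(N) = 1 / (s + (1 - s) / N) for serial fraction s on N processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

if __name__ == "__main__":
    s = 0.05  # assume 5% of the code runs sequentially
    for n in (1, 16, 256, 4096, 65536):
        print(f"N={n:6d}  speedup={amdahl_speedup(s, n):6.2f}")
    print(f"asymptotic limit 1/s = {1.0 / s:.0f}")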

​Gregory Newby
Compute Canada​
newby.pdf
File Size: 1951 kb
File Type: pdf
Download File


​Compute Canada's Approach to Data-Centric HPC
Compute Canada is the national provider of advanced research computing in the academic environment, with over 11,000 active users. An ongoing technology refresh program is creating shared data services and integrating practices for the data hierarchy across multiple sites. This session will describe the technology refresh program and sites, uses of software-defined networking and commodity storage building blocks, the role of object storage, the 100Gb wide-area network and Science DMZ, and the data hierarchy from compute node to nearline/archive.
​
Ted Barragy
CGG
Manager, Advanced Systems


barragy.pdf
File Size: 1532 kb
File Type: pdf
Download File

Overview of CGG’s Seismic Processing Storage System

CGG is an oil and gas service company operating multiple datacenters worldwide. These datacenters are devoted primarily to processing seismic datasets for both internal and external customers. This talk focuses on the storage subsystem within these datacenters and is organized as follows. A brief description of the CGG datacenter architecture is given first, including the Storage, Compute, and ‘Seismic Operating System’ components. The Compute and Seismic OS components are covered because they define several constraints on the storage system. Next, a brief description of Dell-based storage building blocks is given, along with some brief remarks on their evolution. This is followed by details on the storage workload characteristics. These particular characteristics correspond to the main challenges faced in designing this storage system: data growth, lifetime, temperature, and access rates. We close with a brief look at next-generation storage systems under evaluation.
​
Bill Magro
Intel
Intel Fellow & Chief Technologist
Data-centric HPC with Intel’s Scalable System Framework

​As the scale and complexity of data analytics rises, a large and growing fraction of these problems can rightly be considered HPC workloads. But, bringing the benefits of HPC system technologies to analytics workloads has proven difficult, due to significant architectural differences between analytics and HPC platforms. We will discuss the key challenges and opportunities, and how Intel is working to enable converged, high-performance platforms for simulation, modeling, analytics, and visualization via its Scalable System Framework.
Paul Calleja
University of Cambridge
​Director, HPC
​​
Clinical and Biomedical Platforms for HPC and HPDA

Both hardware and software components will be covered, with special attention to the status of future research projects involving several departments of the University.
​
James Lowey
TGEN
​CIO
​​
Storage Infrastructure for Precision Medicine

After working with Next Generation Sequencing (NGS) workflows for many years using a more or less "traditional" HPC infrastructure, TGen is adopting a new storage infrastructure within its HPC systems to help speed up, scale, and optimize current and future workflows as genomics continues to transition from the lab to the clinic.
​
Ron Hawkins
SDSC
​​Technology Executive
Life Sciences Research Computing at the San Diego Supercomputer Center 

​​Located at the UC San Diego campus on the “Torrey Pines Mesa” biotech hub, the San Diego Supercomputer Center is at the epicenter of groundbreaking life sciences research being conducted by researchers at UC San Diego, local research institutes, and biotech companies.  This presentation will provide an overview of computing and storage support being provided to life sciences researchers by SDSC and results of a recent study benchmarking key genomics and cryo-EM applications on Dell EMC systems.
​
Vanessa Borcherding
Weill Cornell Medical College
​​Director, Scientific Computing
borcherding.pdf
File Size: 2321 kb
File Type: pdf
Download File

Quelling the Clamor for Containers

Like many other HPC centers, we've gotten lots of requests to enable containers in our HPC environment.  While we're still just dipping our toes in the water, we'd like to share the lessons and caveats we've learned so far.
​
Happy Sithole
Center for High Performance Computing
​​Director
sithole.pdf
File Size: 59341 kb
File Type: pdf
Download File

South Africa's HPC Investment and Valuable Partnerships 

The Center for High Performance Computing is South Africa's premier HPC provider, working with universities and industry. Over the past decade, significant investment has been provided by the Department of Science and Technology to build up HPC services and demonstrate return on investment. These efforts have resulted in a petascale system on the continent and a wide range of successes in different domain areas. The success of the center is attributed to a strong ecosystem, with OEMs, universities, and the broader continent. This talk will cover all these areas and showcase the importance of partnerships at all levels to achieve success.
​
Steven Stein
Intel
​​Product Marketing Manager
stein.pdf
File Size: 2037 kb
File Type: pdf
Download File

Deep Learning is an incredible technology that allows machines to learn insights from massive data sets. While the accuracy of deep learning techniques is approaching that of humans, the amount of computing required to properly train a model is massive. Specifically, deep learning algorithms require massive amounts of low-precision computing in order to train models in a reasonable amount of time. Intel is offering a range of hardware products to meet these computing demands by accelerating low-precision performance. In addition, Intel is optimizing the open source frameworks used for deep learning to run on Intel technologies while also investing in ecosystem adoption.
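
To make the "low-precision computing" point concrete, here is a minimal NumPy sketch of symmetric INT8 quantization in general terms; it is my own illustration, not a description of any specific Intel product, framework, or optimization, and the function names are hypothetical.

import numpy as np

# Generic symmetric INT8 quantization: one scale factor maps FP32 values to 8-bit integers.
def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
error = np.abs(weights - dequantize(q, scale)).max()
print(f"max reconstruction error: {error:.4f}")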

Marc Hamilton
NVIDIA
VP of Solutions Architecture & Engineering
hamilton.pdf
File Size: 2056 kb
File Type: pdf
Download File

AI and The GPU Ready Data Center

The tremendous growth of artificial intelligence applications and services across industries is driving a new computing model for AI based on GPU computing. The parallel computing performance of GPUs makes it possible to train deep neural networks in hours versus weeks and allows you to deploy AI inference capabilities across an ever-increasing number of applications. GPU-accelerated servers not only make AI possible but dramatically reduce the cost of computing versus traditional CPU-only servers. To gain the maximum benefits of GPU computing, you need a GPU-ready data center. This white paper discusses some of the key considerations necessary to prepare your datacenter for GPU computing, particularly the widespread deployment of AI applications across your organization.
Bryan Varble
Mellanox
Staff Architect
varble.pdf
File Size: 7166 kb
File Type: pdf
Download File

Next Generation Smart Interconnect for Machine Learning

Advancements in machine learning and their enabling frameworks require extreme-scale elements very similar in capability and architecture to today’s supercomputers. We will discuss the latest capabilities of the industry’s leading intelligent interconnect devices, which leverage RDMA, in-network processing, and a more effective mapping of communication to maximize the performance of DNN training. We will also cover the next-generation capabilities targeted for distributed machine learning as we prepare to move towards HDR 200Gb/s and approach the next milestones for cognitive computing solutions.
Antonio Cisternino
University of Pisa
Researcher
cisternino.pdf
File Size: 3584 kb
File Type: pdf
Download File

Deep and Machine Learning @UNIPI

The presentation will cover some information on the present and future of deep and machine learning activities at the University of Pisa.
George Turner
Indiana University
Lead Systems Programmer
and
J. Michael Lowe
Indiana University
​
Lead Systems Programmer
turner.pdf
File Size: 24385 kb
File Type: pdf
Download File

Jetstream, New Ventures in Research, Engineering, and Educational Computing

Jetstream is the National Science Foundation's (NSF) first production cloud resource designed to deliver compute services and programming models of use to researchers, engineers, and educators, with a focus on those working in the "long tail of science." Jetstream's initial focus is as an accessible and easy-to-use resource for domain-specific software developers and their users.
​
Jetstream has also proven to be an extremely attractive platform for software and science gateway developers looking to employ new cloud computing tools and techniques that are not practical with traditional high performance computing systems. In this talk, we will focus mainly on the technical details involved in designing, configuring, and deploying the compute, networking, and OpenStack cloud infrastructure. We will also describe interesting use cases that utilize the easy-to-use Atmosphere web-based user interface and ways that developers are expanding the horizons of research computing.

John Taylor
University of Cambridge

SKA-SDP and OpenStack

The SKA is a next-generation radio telescope requiring massive amounts of processing to generate science data products usable by astronomers around the world. The Science Data Processor (SDP) fulfils this role. The SDP will be situated ~1000 km away from the field collectors and is required to support on the order of 1 TByte/s of ingest coupled to hundreds of PFLOPS of compute. This talk will discuss prototyping activities concerning the use of OpenStack in this context.
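
For a sense of scale, a back-of-envelope calculation (my own, assuming the quoted 1 TByte/s ingest were sustained around the clock) shows the data volume the SDP would have to absorb per day:

# Back-of-envelope daily data volume at the quoted SDP ingest rate (assumes sustained 1 TB/s).
ingest_tb_per_s = 1.0
seconds_per_day = 24 * 3600
tb_per_day = ingest_tb_per_s * seconds_per_day
print(f"{tb_per_day:,.0f} TB/day  (~{tb_per_day / 1000:.0f} PB/day)")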
​
Joseph George 
SUSE
VP of Strategy
george.pdf
File Size: 1904 kb
File Type: pdf
Download File

HPC + OpenStack = HPCaaS (HPC-as-a-Service)

While many traditional, compute-intensive HPC users are accustomed to the need for highly tuned, bare-metal access to compute nodes, there are still classes of HPC workloads that are suitable for public or private clouds. In addition, many early or late life-cycle needs for such HPC jobs may also be natural fits for a cloud environment. As the private cloud, typically based upon OpenStack, has matured, more of the performance-specific attributes, like low-latency networking, parallel file systems, and attribute pass-through, can be incorporated into a deployment to help handle more HPC use cases. Couple this with orchestration technologies, like Heat or Magnum, and you have a way to spin up mini-HPC clusters on demand, within a cloud, to offer HPCaaS, allowing better utilization of your compute, storage, and networking resources.
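
As a rough illustration of the "mini-HPC cluster on demand" idea, the sketch below uses the openstacksdk Python client to boot a handful of compute nodes. The cloud name, image, flavor, and network names are placeholders for whatever your OpenStack deployment provides, and a real HPCaaS setup would layer a scheduler, parallel file system, and low-latency fabric on top, typically via Heat or Magnum templates rather than raw API calls.

import openstack

# Hypothetical names below: replace with values from your own OpenStack cloud.
conn = openstack.connect(cloud="my-hpc-cloud")
image = conn.image.find_image("centos-7-hpc")
flavor = conn.compute.find_flavor("c5.large-hpc")
network = conn.network.find_network("cluster-net")

nodes = []
for i in range(4):  # a 4-node "mini cluster"
    server = conn.compute.create_server(
        name=f"hpc-node-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    nodes.append(server)

# Block until the nodes are ACTIVE before handing them to a scheduler or MPI launcher.
for server in nodes:
    conn.compute.wait_for_server(server)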
​
Jay Boisseau
Dell EMC
Chief HPC Strategist
boisseau_v2.pdf
File Size: 2635 kb
File Type: pdf
Download File

Jeff Kirk
Dell EMC
Server CTO Office
kirk.pdf
File Size: 1436 kb
File Type: pdf
Download File

Onur Celebioglu
Dell EMC
HPC Engineering Director
celebioglu.pdf
File Size: 3068 kb
File Type: pdf
Download File


Community Partners
​Spring 2017
