29 November 2019, at CNRS, rue Michel-Ange, Paris.
Software and applications have a much longer lifespan than hardware technologies and supercomputers. The typical lifetime of a supercomputer is about six years, while some applications are used over several decades, both in fundamental research (climate models, fusion codes, …) and in industry (meteorology, aeronautics, automotive, …).
How can this mismatch be resolved? Computational scientists want to attain the best performance on high-end machines while maintaining their well-tested and validated code bases across several architectural generations. This issue is particularly important given the revolution in hardware technology foreseen in the coming decade.
The Forum will discuss several ways to reconcile these two often conflicting goals.
Introduction (15 min): Denis Veynante (CNRS, Paris)
9:30 – 10:15 What We Did for Co-Design in Development of “Fugaku”
Mitsuhisa Sato, Deputy Project Leader and Team Leader, Architecture Development Team, FLAGSHIP 2020 Project, RIKEN Center for Computational Science
Abstract
We have been carrying out the FLAGSHIP 2020 project to develop the Japanese next-generation flagship supercomputer, Post-K, recently named “Fugaku”. In the project, we designed our own processor, based on Armv8 with SVE, as well as the system including the interconnect, together with our industry partner, Fujitsu. In the design of the system, “co-design” between the system and the applications is key to making it efficient and high-performing. We analyzed a set of target applications provided by the application teams in order to design the processor architecture and settle many architectural parameters. In this talk, I will present what we did for the co-design of “Fugaku”, as well as an overview of the system.

10:15 – 10:45 Preparing Applications for Exascale and Beyond: Initial Lessons From the Exascale Computing Project
Dr. Erik W. Draeger, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
Abstract
As the high performance computing (HPC) landscape continues to rapidly evolve, developing applications that can effectively utilize the largest available machines is becoming increasingly challenging. The Exascale Computing Project (ECP), launched in 2016 and funded by the US Department of Energy (DOE), is an aggressive research, development, and deployment project focused on delivery of mission-critical applications at exascale. The ECP application portfolio covers a broad range of scientific domains and methods, from quantum chromodynamics to predictive wind farm modeling to seismic hazard assessment. In this talk, I will describe the common challenges faced by computational scientists as they prepare their applications to use the next generation of supercomputers and the strategies they are using to overcome them. Strategies for developing portable, performant code will be discussed and early examples of reexamining traditional algorithms and methods will be described.
Bio
Erik Draeger is the Deputy Director of Application Development for the Exascale Computing Project, as well as the High Performance Computing group leader at the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory. He received a PhD in theoretical physics from the University of Illinois at Urbana-Champaign in 2001 and has over a decade of experience developing scientific applications for maximum scalability and minimal time to solution on next-generation architectures.
10:45 – 11:15 Coffee break
11:15 – 11:45 Latest News on EuroHPC
Laurent Crouzet (Digital Services and Infrastructures, MESRI)
Bio
Dr Laurent Crouzet is currently Head of the “Digital Services and Infrastructures” Department at the General Directorate for Research and Innovation of the French Ministry for Higher Education, Research and Innovation. He is in charge of the strategy on HPC and e-Infrastructures. He is the French representative in the EuroHPC Governing Board and in the EOSC Governing Board, and he serves as French delegate to the European e-Infrastructures Reflection Group (e-IRG).
Before joining the French Ministry for Research, Dr Laurent Crouzet was in charge of coordinating the HPC activities of the Physical Science Division of CEA. He first served in the Defense Division of CEA and joined the Physical Science Division in 2006 as an Assistant to the Director, in charge of HPC and computing sciences.
Dr Crouzet holds a PhD in Applied Mathematics, awarded in 1994 by Pierre and Marie Curie University (Paris VI), and a Master's degree in Applied Mathematics with a specialization in Parallel Computing.

11:45 – 12:15 The European Centers of Excellence
Edouard Audit (CEA)
Abstract
The European HPC strategy relies on three pillars: infrastructure, to provide European researchers with world-class supercomputers; technology, to develop an independent European HPC system supply; and applications, to maintain European excellence in HPC applications and to widen their usage.
The application pillar is mainly supported by the Centres of Excellence (CoEs). Ten of these centres have been established in the framework of the H2020 programme. This talk will present these centres, their scientific objectives, the way they plan to tackle the exascale challenge, and the scientific benefits to be expected.
Bio
Edouard Audit studied physics at the Ecole Normale Supérieure de Cachan and received a PhD in theoretical physics from University Paris 7 in 1997. He worked at the Paris Observatory in the field of cosmology and large-scale structure formation. He then joined CEA, where he has been working on star formation, the interstellar medium and laser-generated plasmas. In 2010, he was appointed founding director of Maison de la Simulation, a joint laboratory between CEA, CNRS and the universities of Paris-Sud and Versailles-St Quentin. Edouard is also a professor at INSTN, where he teaches computational astrophysics and parallel computing. He is the coordinator of the EU-funded Energy Oriented Centre of Excellence (EoCoE) and the chairman of the HPC CoE Council (HPC3), which gathers representatives of all the European HPC centres of excellence.
12:15 – 12:45 Hardware Panorama & Legacy Codes
Guillaume Colin de Verdière (CEA)
Abstract
In this talk, I will show that technology evolution will force changes in the way scientific codes are programmed. I will explain that the power, scalability and memory walls are strong drivers for changing development practices, for very practical reasons.
From those findings, I will describe their impact on legacy codes. I will first define what legacy codes are, and then infer what needs to be done to them, based on the conclusions of the first part.
Bio
Guillaume Colin de Verdière, PhD, International Expert at CEA (Commissariat à l’énergie atomique et aux énergies alternatives).
Guillaume graduated from ENSCP (Chimie ParisTech) in 1984 and obtained his PhD in 2019. He has been a CEA staff member since 1985. After contributing to large scientific codes for 7 years, he was deeply involved in post-processing software, with a focus on visualization and I/O. Since 2008, his activities have focused on investigating new architectures for the next generations of supercomputers and foreseeing the impact of such architectures on scientific codes.
12:45 – 14:00 Lunch
14:00 – 14:15 Presentation of the C3I
Michel Kern (Inria)
14:15 – 14:45 Bridging the software and performance gap to exascale for weather and climate simulations
Thomas Schulthess (ETH Zürich, Switzerland)
Abstract
During the early 2020s, the fastest supercomputers will reach exaflops-scale performance. The most affordable exascale systems will be GPU-accelerated, with offerings from multiple vendors. At the same time, there are many open questions on how programming models will evolve in the coming decade. In this talk, I will discuss software strategies for weather and climate codes to take advantage of these new architectures. We will also look at strategies to close the performance gap for global simulations at 1 km horizontal resolution with decent throughput.
Bio
Thomas Schulthess holds a chair for computational physics at ETH Zurich and has directed the Swiss National Supercomputing Centre (CSCS) in Lugano since fall 2008. He received his PhD in 1994 from ETH Zurich and spent many years at Oak Ridge National Laboratory. While his primary research is on computational methods for materials science, as CSCS Director he has taken an interest in developing energy-efficient computing systems for climate modelling and meteorology. Thomas led the teams that won the ACM Gordon Bell Prize in 2008 and 2009 with the first production-level applications that sustained a petaflop. Under his leadership, CSCS and MeteoSwiss introduced the first GPU-accelerated system for weather forecasting, which went operational in 2016 and for which they received the Swiss ICT Award for outstanding IT-based projects and services.
14:45 – 15:15 Development of an Exascale-Ready Fluid Dynamics Solver with the deal.II Library
Martin Kronbichler (TUM, Munich, Germany)
Abstract
My talk will focus on the software design decisions we made when developing a new solver for transitional and turbulent fluid flow in engineering and biomedical applications. Algorithmically, the solver uses time integrators from the BDF and Runge–Kutta families and high-order discontinuous Galerkin finite element discretizations in space. For high-order methods on contemporary hardware at the threshold of exascale, matrix-free evaluation of discrete operators is around an order of magnitude faster than sparse matrix algebra, and thus essential. Matrix-free algorithms, however, go against common practice in PDE software design, because the implementation of the finite element components must be integrated into the linear solvers, rather than outsourcing all linear algebra to external packages. Nonetheless, generic software design is possible by providing mathematical descriptions of the discretized finite element operators in an abstract way. I will present the design in the generic library deal.II, which requires the application code to provide only the operation at the quadrature points of cell and face integrals. The numerical quadrature as well as the loop over the mesh can be provided by the library, including the MPI ghost communication and parallelization strategies.
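To make this division of labour concrete, here is a minimal sketch of such an operator, assuming a recent deal.II release; a plain Laplace operator with cell integrals only stands in for the DG operators of the talk. The application supplies the quadrature-point operation, while the MatrixFree infrastructure supplies the cell loop, the SIMD vectorization across cells and the MPI ghost-value exchange.

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

// Sketch: matrix-free application of a Laplace operator. Only the
// quadrature-point operation (submitting the solution gradient to be
// tested against the gradients of the test functions) is application
// code; loops, vectorization and communication come from the library.
template <int dim, int fe_degree>
class LaplaceOperator
{
public:
  using VectorType = LinearAlgebra::distributed::Vector<double>;

  LaplaceOperator(const MatrixFree<dim, double> &mf) : matrix_free(mf) {}

  void vmult(VectorType &dst, const VectorType &src) const
  {
    dst = 0.;
    // cell_loop performs the MPI ghost exchange and the threaded,
    // SIMD-vectorized traversal of locally owned cells.
    matrix_free.cell_loop(&LaplaceOperator::local_apply, this, dst, src);
  }

private:
  void local_apply(const MatrixFree<dim, double> &mf,
                   VectorType &dst,
                   const VectorType &src,
                   const std::pair<unsigned int, unsigned int> &cells) const
  {
    FEEvaluation<dim, fe_degree> phi(mf);
    for (unsigned int cell = cells.first; cell < cells.second; ++cell)
      {
        phi.reinit(cell);
        phi.gather_evaluate(src, EvaluationFlags::gradients);
        for (unsigned int q = 0; q < phi.n_q_points; ++q)
          phi.submit_gradient(phi.get_gradient(q), q); // the user-defined part
        phi.integrate_scatter(EvaluationFlags::gradients, dst);
      }
  }

  const MatrixFree<dim, double> &matrix_free;
};

The same pattern extends to the face integrals needed for DG discretizations, where the library's loop() variant additionally takes inner-face and boundary-face worker functions.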
This library-based setup loosens the coupling between the development of the application software and the hardware specifics, and lets the application code focus on adding interesting physical models or stable mathematical formulations. At the same time, the deal.II library and its various third-party packages allow us to leverage NVIDIA GPUs and CPUs, as well as today's SIMD vectorization schemes, with an efficiency close to the hardware limits. Finally, the development and maintenance of an exascale-ready infrastructure can be shared over a much larger set of applications and researchers.

Bio
Dr. Martin Kronbichler is a senior researcher at the Institute for Computational Mechanics, Technical University of Munich, Germany. He is a principal developer of the deal.II finite element library and leads the high-order as well as the high-performance activities in this project. His research focuses on efficient high-order discontinuous Galerkin schemes with matrix-free implementations, as well as fast iterative solvers in the context of computational fluid dynamics. For these efforts, he has received funding as a principal investigator in the exascale project ExaDG within the German priority programme SPPEXA, as well as from other sources.
15:15 – 15:45 The Parallel Programming Osmotic Membrane
Jesus Labarta (UPC)
Abstract
The talk will present a personal vision of the history of, and common practices in, programming HPC platforms, and of how I believe this should evolve in the future.
A first message of the talk is the importance of decoupling programmers from the details of the architectures through the programming model interface. The analogy is to an osmotic membrane, through which sufficient information can be conveyed without exposing the programmer to, and requiring them to control, all the details of the system.
A second important message is that the challenge towards exascale lies not only in the detailed technologies and programming models that may appear, but, very importantly, in a change of programmer mind-set: from the dominant resource-oriented, latency-limited view of programming to an asynchronous, throughput-oriented one.
I will present some examples of how I believe this vision, minimizing development effort, maximizing the maintainability of the codes and achieving performance portability, can be realized based on the task-based model developments at BSC, as the sketch below illustrates.
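As an illustration of this asynchronous, task-based style, here is a minimal sketch in plain OpenMP tasking, whose depend clauses were shaped in part by BSC's StarSs/OmpSs work; the stage1/stage2/stage3 kernels are hypothetical placeholders. The programmer declares what data each task reads and writes, and the runtime schedules the work asynchronously rather than in lock-step:

#include <cstddef>

// Hypothetical per-block kernels, stand-ins for real computational stages.
static void stage1(double *a, int n) { for (int k = 0; k < n; ++k) a[k] += 1.0; }
static void stage2(const double *a, double *b, int n) { for (int k = 0; k < n; ++k) b[k] = 2.0 * a[k]; }
static void stage3(const double *b, double *c, int n) { for (int k = 0; k < n; ++k) c[k] = b[k] - 1.0; }

// Each block flows through three stages; the depend clauses express the
// producer/consumer relations, so independent blocks and stages overlap
// freely instead of synchronizing after every stage.
void blocked_pipeline(double *a, double *b, double *c, int nblocks, int bs)
{
#pragma omp parallel
#pragma omp single
  for (int i = 0; i < nblocks; ++i)
    {
      double *ai = a + (std::size_t)i * bs;
      double *bi = b + (std::size_t)i * bs;
      double *ci = c + (std::size_t)i * bs;

#pragma omp task depend(inout : ai[0:bs])
      stage1(ai, bs);

#pragma omp task depend(in : ai[0:bs]) depend(out : bi[0:bs])
      stage2(ai, bi, bs);

#pragma omp task depend(in : bi[0:bs]) depend(out : ci[0:bs])
      stage3(bi, ci, bs);
    }
  // The implicit barrier at the end of the parallel region waits for all
  // tasks; no per-stage synchronization is ever requested by the programmer.
}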
Bio
Professor Jesús Labarta received a B.S. in Telecommunications Engineering from the Technical University of Catalonia (UPC) in 1981 and his Ph.D. in Telecommunications Engineering, also from UPC, in 1983. He has been a full professor of Computer Architecture at UPC since 1990 and was Director of CEPBA, the European Center of Parallelism of Barcelona, from 1996 to 2005.
Since its creation in 2005, he has been the Director of the Computer Sciences Research Department within the Barcelona Supercomputing Center (BSC).
During his 35-year academic career, Prof. Labarta has made significant contributions in programming models and performance analysis tools for parallel, multicore and accelerated systems, with the sole objective of helping application programmers to improve their understanding of their application’s performance and to improve programming productivity in the transition towards very large-scale systems. Under his supervision, his research team has been developing performance analysis and prediction tools (Paraver and Dimemas) and pioneering research on how to increase the intelligence embedded in these performance tools.
He has also been a driving force behind the task-based StarSs programming model, which gives runtime systems the required intelligence to dynamically exploit the potential parallelism and resources available. His team has influenced the evolution of the OpenMP standard with the OmpSs instantiation of StarSs, and, in particular, its tasking model.
He has constantly tried to incorporate his vision and ideas into industrial collaborations, including projects partially funded by the European Commission as well as direct collaborations with HPC companies.
Currently, Prof. Labarta leads the Performance Optimization and Productivity (POP) EU Centre of Excellence, where more than 100 users (both academic groups and SMEs) from a very wide range of application sectors receive performance assessments and suggestions for code refactoring efforts.
He has authored a large number of publications in peer-reviewed conferences and journals and has advised dozens of PhD students.
In November 2018, the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE CS) presented him with the ACM-IEEE CS Ken Kennedy Award for his seminal contributions to programming models and performance analysis tools for high performance computing. He is the first European researcher to receive this award.
15:45 – 16:15 Coffee break
16:15 – 17:30 Round table: what roadmap for moving to the new architectures?
Moderator: Edouard Audit
Participants: Erik Draeger, Virginie Grandgirard, Mitsuhisa Sato, Thomas Schulthess, Gabriel Staffelbach, Isabelle Terasse.
Bio
Gabriel Staffelbach is a researcher at CERFACS, the Centre Européen de Recherche et de Formation au Calcul Scientifique. For more than a decade, he has spearheaded the technology watch for the fluid mechanics code AVBP, a community code at the state of the art of high-performance computing that is used in both academia and industry.