29 November 2019 at CNRS, rue Michel-Ange, Paris.
The registration form is available here
Software and applications have a much longer lifespan than hardware technologies and supercomputers. The typical lifetime of a supercomputer is about six years, while some applications are used over several decades, both in fundamental research (climate models, fusion codes, …) and in industry (weather forecasting, aeronautics, the car industry, …).
How can this mismatch be resolved? Computational scientists want to attain the best performance on high-end machines while maintaining their well-tested and validated code bases across several architectural generations. This issue is of particular importance given the foreseen revolution in hardware technology in the coming decade.
The Forum will discuss several ways to compromise between these two often conflicting goals.
9:15 – 9:30 Introduction
Maria Faury (CEA)
9:30 – 10:15 What We Did for Co-Design in Development of “Fugaku”
Mitsuhisa Sato, Deputy Project Leader and Team Leader, Architecture Development Team, FLAGSHIP 2020 Project, RIKEN Center for Computational Science
We have been carrying out the FLAGSHIP 2020 Project to develop the Japanese next-generation flagship supercomputer, Post-K, recently named "Fugaku". In the project, we designed our original processor based on Armv8 with SVE, as well as the system including the interconnect, together with our industry partner, Fujitsu. In the design of the system, "co-design" between the system and the applications is key to making it efficient and high-performance. We analyzed a set of target applications provided by the application teams to guide the design of the processor architecture and the choice of many architectural parameters. In this talk, I will present what we did for the co-design of "Fugaku", together with an overview of the system.
10:15 – 10:45 Preparing Applications for Exascale and Beyond: Initial Lessons From the Exascale Computing Project
Dr. Erik W. Draeger, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory
As the high performance computing (HPC) landscape continues to rapidly evolve, developing applications that can effectively utilize the largest available machines is becoming increasingly challenging. The Exascale Computing Project (ECP), launched in 2016 and funded by the US Department of Energy (DOE), is an aggressive research, development, and deployment project focused on delivery of mission-critical applications at exascale. The ECP application portfolio covers a broad range of scientific domains and methods, from quantum chromodynamics to predictive wind farm modeling to seismic hazard assessment. In this talk, I will describe the common challenges faced by computational scientists as they prepare their applications to use the next generation of supercomputers and the strategies they are using to overcome them. Strategies for developing portable, performant code will be discussed and early examples of reexamining traditional algorithms and methods will be described.
Erik Draeger is the Deputy Director of Application Development for the Exascale Computing Project, as well as the High Performance Computing group leader at the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory. He received a PhD in theoretical physics from the University of Illinois, Urbana-Champaign in 2001 and has over a decade of experience developing scientific applications to achieve maximum scalability and minimal time to solution on next-generation architectures.
10:45 – 11:15 Coffee break
11:15 – 11:45 EuroHPC Strategy for Applications
Laurent Crouzet (MESRI)
11:45 – 12:15 The European Centers of Excellence
Edouard Audit (CEA)
12:15 – 12:45 Hardware Panorama & Legacy Codes
Guillaume Colin de Verdière (CEA)
12:45 – 14:00 Lunch
14:00 – 14:15 Presentation of the C3I
Michel Kern (Maison de la Simulation)
14:15 – 14:45 The Need to Revisit Programming: the Example of the Meteo Swiss Model
Thomas Schulthess (ETH Zürich, Switzerland)
14:45 – 15:15 Development of an Exascale-Ready Fluid Dynamics Solver with the deal.II Library
Martin Kronbichler (TUM, Munich, Germany)
My talk will focus on the software design decisions we made when developing a new solver for transitional and turbulent fluid flow in engineering and biomedical applications. Algorithmically, the solver uses time integrators from the BDF and Runge–Kutta families and high-order discontinuous Galerkin finite element discretizations in space. For high-order methods on contemporary hardware at the threshold of exascale, matrix-free evaluation of discrete operators is around an order of magnitude faster than sparse matrix algebra and thus essential. Matrix-free algorithms, however, go against common practice in PDE software design, because the implementation of the finite element components must be integrated into the linear solvers, rather than outsourcing all linear algebra to external packages. Nonetheless, generic software design is possible by providing mathematical descriptions of the discretized finite element operators in an abstract way. I will present the design in the generic library deal.II, which requires the application code to provide only an implementation of the operation at the quadrature points of cell and face integrals. The numerical quadrature, as well as the loop over the mesh, can be provided by the library, including the MPI ghost communication and parallelization strategies.
This library-based setup loosens the coupling between the development of the application software and the hardware specifics, and lets the application code focus on adding interesting physical models or stable mathematical formulations. At the same time, the deal.II library and its various third-party packages allow us to use NVIDIA GPUs as well as CPUs, together with today’s SIMD vectorization schemes, with an efficiency close to the hardware limits. Finally, the development and maintenance of an exascale-ready infrastructure can be shared across a much larger set of applications and researchers.
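The core idea behind matrix-free evaluation — applying a discrete operator on the fly instead of storing a sparse matrix — can be sketched in a few lines of plain C++. This is an illustrative toy (a 1D Laplacian with homogeneous Dirichlet boundaries), not deal.II code; all names here are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Matrix-free application of the 1D Laplacian stencil (-1, 2, -1):
// each "row" of the operator is rebuilt on the fly, so the matrix
// itself is never stored in memory.
std::vector<double> apply_laplacian_matrix_free(const std::vector<double>& u) {
  const std::size_t n = u.size();
  std::vector<double> v(n, 0.0);
  for (std::size_t i = 0; i < n; ++i) {
    const double left  = (i > 0)     ? u[i - 1] : 0.0;  // Dirichlet boundary
    const double right = (i + 1 < n) ? u[i + 1] : 0.0;
    v[i] = 2.0 * u[i] - left - right;
  }
  return v;
}

// The same operator applied through an explicitly assembled sparse
// matrix in triplet (COO) form, for comparison.
struct Triplet { std::size_t row, col; double value; };

std::vector<Triplet> assemble_laplacian(std::size_t n) {
  std::vector<Triplet> A;
  for (std::size_t i = 0; i < n; ++i) {
    A.push_back({i, i, 2.0});
    if (i > 0)     A.push_back({i, i - 1, -1.0});
    if (i + 1 < n) A.push_back({i, i + 1, -1.0});
  }
  return A;
}

std::vector<double> apply_assembled(const std::vector<Triplet>& A,
                                    const std::vector<double>& u) {
  std::vector<double> v(u.size(), 0.0);
  for (const Triplet& t : A) v[t.row] += t.value * u[t.col];
  return v;
}
```

Both paths produce the same result vector; the matrix-free path trades stored matrix entries for recomputation, which is exactly the bargain that pays off on memory-bandwidth-limited hardware.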
Dr. Martin Kronbichler is a senior researcher at the Institute for Computational Mechanics, Technical University of Munich, Germany. He is a principal developer of the deal.II finite element library and leads the high-order as well as high-performance activities in this project. His research focus is on efficient high-order discontinuous Galerkin schemes with matrix-free implementations as well as fast iterative solvers in the context of computational fluid dynamics. For these efforts, he has received funding as a principal investigator in the exascale project ExaDG within the German priority program SPPEXA and other sources.
15:15 – 15:45 The Parallel Programming Osmotic Membrane
Jesus Labarta (UPC)
The talk will present a personal vision of the history of, and common practices in, programming HPC platforms, and of how I believe this should evolve in the future.
A first message of the talk is the importance of decoupling programmers from the details of the architectures through the programming model interface. The analogy is an osmotic membrane, through which sufficient information can be conveyed without exposing the programmer to, or requiring them to control, all the details of the system.
A second important message is that the challenge towards exascale lies not only in the detailed technologies and programming models that may appear, but, very importantly, in a mind-set change for programmers: from the dominant resource-oriented, latency-limited view of programming to an asynchronous, throughput-oriented one.
I will present some examples of how I believe this vision of minimizing development effort, maximizing code maintainability, and achieving performance portability can be realized, based on the task-based model developments at BSC.
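The shift from a latency-limited to a throughput-oriented, task-based view can be loosely sketched in standard C++, with std::async standing in for a task runtime such as OmpSs/StarSs. This is an analogy under that assumption, not BSC's API; the function name is illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <future>
#include <numeric>
#include <vector>

// Throughput-oriented sketch: each block of a reduction is expressed as
// a task. The runtime (here std::async) decides when and where tasks
// run; the programmer states only the work and its data dependencies,
// which are carried by the futures rather than by explicit barriers.
double blocked_sum(const std::vector<double>& data, std::size_t block) {
  std::vector<std::future<double>> tasks;
  for (std::size_t begin = 0; begin < data.size(); begin += block) {
    const std::size_t end = std::min(begin + block, data.size());
    tasks.push_back(std::async(std::launch::async, [&data, begin, end] {
      return std::accumulate(data.begin() + static_cast<std::ptrdiff_t>(begin),
                             data.begin() + static_cast<std::ptrdiff_t>(end),
                             0.0);
    }));
  }
  double total = 0.0;
  for (auto& t : tasks) total += t.get();  // consume results as they arrive
  return total;
}
```

Nothing in the caller prescribes an execution order for the blocks: any schedule that respects the futures is valid, which is the mind-set change the talk argues for.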
Professor Jesús Labarta received a B.S. in Telecommunications Engineering from the Technical University of Catalunya (UPC) in 1981 and a Ph.D. in Telecommunications Engineering, also from UPC, in 1983. He has been a full professor of Computer Architecture at UPC since 1990 and was director of CEPBA (the European Center for Parallelism of Barcelona) from 1996 to 2005.
Since its creation in 2005, he has been the Director of the Computer Sciences Research Department within the Barcelona Supercomputing Center (BSC).
During his 35-year academic career, Prof. Labarta has made significant contributions in programming models and performance analysis tools for parallel, multicore and accelerated systems, with the sole objective of helping application programmers to improve their understanding of their application’s performance and to improve programming productivity in the transition towards very large-scale systems. Under his supervision, his research team has been developing performance analysis and prediction tools (Paraver and Dimemas) and pioneering research on how to increase the intelligence embedded in these performance tools.
He has also been a driving force behind the task-based StarSs programming model, which gives runtime systems the required intelligence to dynamically exploit the potential parallelism and resources available. His team has influenced the evolution of the OpenMP standard with the OmpSs instantiation of StarSs, and, in particular, its tasking model.
He has constantly sought to incorporate his vision and ideas into industrial collaborations, including projects partially funded by the European Commission as well as direct collaborations with HPC companies.
Currently, Prof. Labarta leads the Performance Optimization and Productivity (POP) EU Center of Excellence, where more than 100 users (from both academia and SMEs) across a very wide range of application sectors receive performance assessments and suggestions for code refactoring.
He has authored a large number of publications in peer-reviewed conferences and journals and has advised dozens of PhD students.
In November 2018, the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE CS) awarded him the ACM-IEEE CS Ken Kennedy Award for his seminal contributions to programming models and performance analysis tools for high performance computing. He is the first European researcher to receive this award.
15:45 – 16:15 Coffee break
16:15 – 17:30 Round table: what roadmap for moving towards the new architectures?
Moderator: Edouard Audit
Participants: Erik Draeger, Mitsuhisa Sato, Thomas Schulthess, Gabriel Staffelbach, Isabelle Terasse.
Gabriel Staffelbach is a researcher at CERFACS, the Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique. For the past decade he has spearheaded the technology watch for the fluid mechanics code AVBP. This community code, at the state of the art of high-performance computing, is used in both academia and industry.