March 15, 2024, CNRS Michel-Ange, Paris
This forum addresses the reproducibility of scientific simulations, examining its various aspects, in particular methods for documenting processes, the standardization of computing environments, and the importance of sharing data and source code so that results can be verified and replicated accurately.
9h15 : Welcome (Michel Kern)
9h30 - 10h15 : Better Reproducibility: We do not want it, cannot afford it, but still need it and can have it (Mike Heroux, Sandia)
Abstract
Scientific progress relies on both trustworthy and trusted results, requiring that we address technical and human concerns. Fundamental to trust is reproducibility.
The scientific community has long acknowledged that the reproducibility of our computational results still needs improvement, even as computational tools to support reproducibility have improved dramatically. We have certainly seen progress in valuing and producing reproducible results, but it is fair to ask, “Why is our progress so slow?”
In this presentation, we briefly sketch some of the current efforts and progress made in improving reproducibility, highlighting some of the specific challenges and approaches for high-performance computing. We then move on to focus on human-centered challenges and approaches that we believe are essential to address if we are to accelerate our progress toward improving reproducibility. Specifically, we will explore how our incentive and reward systems discourage improvement in reproducibility and how leveraging ideas from cognitive, social, and economic sciences can inform our activities and accelerate progress. We end the presentation with ideas for concrete next steps.
Biography
Michael (Mike) Heroux is a Senior Scientist at Sandia National Laboratories, Director of Software Technology for the US Department of Energy Exascale Computing Project (ECP), and Scientist in Residence at St. John’s University, MN USA. His research interests include all aspects of scalable scientific and engineering software for new and emerging parallel computing architectures.
He is the founder of the Trilinos scientific libraries, Kokkos performance portability, Mantevo miniapps, and HPCG Benchmark projects, and leads the Extreme-scale Scientific Software Stack (E4S) project in DOE, a curated collection of HPC software libraries and tools. He is also the PI of the PESO software-ecosystem stewardship and advancement project focused on post-ECP scientific software efforts.
Mike has led community projects to improve reproducibility for the past 15 years, including establishing the Reproducible Computation Results initiative as Editor-in-Chief of ACM Transactions on Mathematical Software, bootstrapping and establishing the Supercomputing Conference Series reproducibility initiative, driving the effort to align ACM reproducibility terminology to match definitions of the broader community, and establishing the NISO standard for reproducibility terminology and badging. He is also the lead author of the NSF Advisory Committee on Cyberinfrastructure report on Trustworthy Computations.
Mike is a Fellow of the Society for Industrial and Applied Mathematics (SIAM), a Distinguished Member of the Association for Computing Machinery (ACM), and a Senior Member of IEEE. He serves on the ACM Publications Board and is chair of the ACM New Publications Committee.
10h15 - 10h30 : Presentation of the reproducibility network (Arnaud Legrand, CNRS/LIG)
10h30 - 11h00 : Break
Session Software Environments (chair Denis Veynante)
11h00 - 11h30 : Reproducibility and Performance: Why Choose? (Ludovic Courtès, Inria)
Abstract
High-performance computing (HPC) is often seen as antithetical to “reproducibility”: one would have to choose between software that achieves high performance and software that can be deployed in a reproducible fashion.
By giving up on reproducibility, we would give up on verifiability, a foundation of the scientific process. How can we reconcile performance and reproducibility? Engineering work that has gone into performance portability has already proved fruitful.
This talk will debunk common misconceptions regarding performance in HPC. We will show how GNU Guix, a software deployment tool and GNU/Linux distribution designed with reproducibility in mind, lets users deploy software optimized for the target machines while preserving provenance tracking and reproducibility.
Bio
Research software engineer at Inria in Bordeaux, France, I have been working with demanding HPC practitioners who want it all: performance, flexibility, and reproducibility. I founded Guix in 2012, soon joined by an ever-growing team of contributors, and co-founded the Guix-HPC effort in 2017. Guix has proved to be a great tool to try and satisfy those seemingly mutually-exclusive HPC needs. I am a member of the Software chapter of the French Comité pour la science ouverte (Open Science Committee).
11h30 - 12h00 : Reproducibility and programming models: MPI and OpenMP (Hugo Taboada, LIHPC/CEA)
Abstract
Reproducibility is an important aspect of high-performance computing (HPC). When using the MPI and OpenMP programming models, a few key practices must be followed to ensure the reproducibility of results when it is needed. We will introduce how the MPI standard addresses reproducibility, then show the sources of non-determinism in MPI and explain how to manage them. Next, we focus on reproducibility management in OpenMP. To conclude, we discuss how reproducibility is handled in practice in HPC.
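To make the reduction-order issue concrete, here is a minimal sketch (our illustration, not material from the talk): the result of MPI_Reduce with MPI_SUM on doubles may depend on how the implementation associates the additions, whereas gathering the values and summing them in a fixed rank order is deterministic across runs.

    /* Minimal sketch (illustration, not from the talk): floating-point
     * reduction order is one classic source of non-reproducibility in MPI.
     * MPI_Reduce may associate the additions differently depending on the
     * process count and the algorithm chosen by the implementation;
     * gathering and summing in rank order restores determinism. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = 1.0 / (rank + 1);   /* arbitrary per-rank value */

        /* Fast path: association order is implementation-defined. */
        double fast_sum = 0.0;
        MPI_Reduce(&local, &fast_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Reproducible path: gather, then add in a fixed (rank) order. */
        double *all = NULL;
        if (rank == 0) all = malloc(size * sizeof *all);
        MPI_Gather(&local, 1, MPI_DOUBLE, all, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            double repro_sum = 0.0;
            for (int i = 0; i < size; i++) repro_sum += all[i];
            printf("fast=%.17g repro=%.17g\n", fast_sum, repro_sum);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }

The same concern applies to OpenMP: a reduction(+:sum) clause makes the order of the partial sums depend on the schedule and thread count, so a fixed-order final accumulation is needed when bit-for-bit results are required.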
Bio
Hugo Taboada is a research scientist at CEA. He received his Ph.D. in computer science from the University of Bordeaux in 2018, and he joined CEA the same year. He is now working to help applications benefit from architecture and runtime specificities. His topics of interest are scheduling, thread placement, communication overlap, performance projection, and Domain Specific Language interaction with MPI runtimes. He also participates in the MPI Forum, helping to design the next MPI standard.
12h00 - 12h30 : Floating-Point Determinism, Reproducibility & Accuracy in HPC (David Defour, Université de Perpignan)
Abstract
The manipulation of floating-point numbers is at the heart of numerical software. The numerical credibility of such software depends on the accuracy of the results and on their reproducibility over time and across computing environments. In this talk we will discuss how these properties are affected not only by the method used, but also by rounding errors, the processor, the compiler, the operating system, and the programming language.
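The core of the problem can be shown in a few lines (a minimal sketch, not taken from the talk): floating-point addition is not associative, so any change in evaluation order, whether introduced by the compiler, by vectorization, or by parallel scheduling, can change the result.

    /* Minimal sketch (illustration, not from the talk): the two
     * mathematically equal expressions below give different results
     * in IEEE 754 double precision because rounding depends on the
     * order of the additions. */
    #include <stdio.h>

    int main(void) {
        double a = 1e16, b = -1e16, c = 1.0;
        double left  = (a + b) + c;   /* cancellation first: 1.0 */
        double right = a + (b + c);   /* c absorbed into b: 0.0  */
        printf("(a+b)+c = %.17g\n", left);
        printf("a+(b+c) = %.17g\n", right);
        return 0;
    }

Compiled with aggressive options such as -ffast-math, the compiler is allowed to reassociate such sums, which is precisely why the same source code can produce different results across compilers and platforms.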
Bio
David Defour is a professor of computer science at the University of Perpignan. He received his PhD in 2003 from ENS Lyon, working on computer arithmetic and computer architecture. For the past 20 years, he has been developing solutions for "unconventional" arithmetics targeting multicore architectures and, more specifically, GPUs.
12h30 - 12h45 : Europe update (Jean-Philippe Nominé, CEA)
12h45 - 14h00 : Lunch at the CNRS cafeteria
Session Challenges for Publications (chair Violaine Louvet)
14h00 - 14h30 : Reproducibility problems and incentives for change (Arnaud Legrand, CNRS/LIG)
Abstract
The term "research reproducibility" covers several rather different problems that call for equally different responses. I will try to clarify the issues that relate specifically to computation and how certain tools can help, then how the peer-review process for publications has evolved in recent years to encourage better practices.
Bio
Arnaud Legrand has been a senior CNRS researcher at Université Grenoble Alpes since 2004. His research interests encompass the study of large-scale distributed computing systems, their optimization (scheduling, combinatorial optimization, and game theory), and performance evaluation (trace analysis, performance prediction, statistical learning), in particular through simulation. He is one of the core developers of the SimGrid project and is involved in the promotion of better research practices and methods, in particular through the MOOC « Reproducible Research: methodological principles for a transparent science ».
14h30 - 15h00 : Practical Approaches to Reproducibility in the Era of Rapid Scientific Evolution: An Example with Environmental Data Science (Anne Fouilloux, Simula Research Laboratory)
Abstract
In an era marked by the rapid evolution of scientific methodologies, the need to facilitate the uptake of research outcomes to trigger innovation is becoming imperative, and with it the reproducibility of research has become paramount. We discuss tools and practices aimed at enhancing the reproducibility of scientific research. We illustrate our approach with initiatives such as the Climate Informatics Reproducibility Challenge and show how we were guided by the principles of Open Science and FAIR (Findable, Accessible, Interoperable, Reusable) research objects. We will explain our efforts to establish a robust framework for ensuring transparency and replicability in computational research within the domain of Environmental Data Science.
Session Community Practices (chair Marc Baaden)
15h00 - 15h30 : The butterfly effect and reproducibility: climate and weather in a computer (Olivier Marti, Jérôme Servonnat, and Arnaud Caubel, IPSL/LSCE)
Abstract
In the eighteenth century, Pierre Simon de Laplace explained determinism by asserting that someone with perfect knowledge of the position and velocity of every object in the Universe, and of the laws of physics, could compute the past as well as the future. In the mid-twentieth century, Edward Lorenz answered him with the question: "Can the flap of a butterfly's wings in Brazil set off a tornado in Texas?" He was illustrating the theory of deterministic chaos: an infinitesimal error in the parameters of the initial state quickly leads to a completely wrong result. For weather forecasting, predictability is thus limited to a few days, owing to imprecise knowledge of the initial conditions, but also of the laws governing the system's evolution. In numerical weather prediction, the forecast is sensitive to any change in how the computation is carried out: a change of computer, of compilation options, or of library options; the slightest change modifies the forecast a few days out.
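The divergence Lorenz described is easy to reproduce numerically (a minimal sketch, not taken from the talk): integrating his famous three-variable system from two initial conditions that differ by 1e-12 quickly yields trajectories that have nothing in common.

    /* Minimal sketch (illustration, not from the talk): the Lorenz
     * system integrated from two initial states differing by 1e-12.
     * Deterministic chaos amplifies the perturbation until the two
     * trajectories are completely decorrelated. */
    #include <stdio.h>
    #include <math.h>

    static void lorenz_step(double *x, double *y, double *z, double dt) {
        const double sigma = 10.0, rho = 28.0, beta = 8.0 / 3.0;
        double dx = sigma * (*y - *x);
        double dy = *x * (rho - *z) - *y;
        double dz = *x * *y - beta * *z;
        *x += dt * dx;  *y += dt * dy;  *z += dt * dz;  /* forward Euler */
    }

    int main(void) {
        double x1 = 1.0, y1 = 1.0, z1 = 1.0;
        double x2 = 1.0 + 1e-12, y2 = 1.0, z2 = 1.0;  /* the "butterfly" */
        const double dt = 0.001;
        for (int i = 0; i <= 40000; i++) {            /* 40 time units */
            if (i % 10000 == 0)
                printf("t=%4.0f  |x1-x2| = %.3e\n", i * dt, fabs(x1 - x2));
            lorenz_step(&x1, &y1, &z1, dt);
            lorenz_step(&x2, &y2, &z2, dt);
        }
        return 0;
    }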
Climate science takes a different perspective. The goal is no longer to predict the state of the atmosphere or the ocean in one month or in a hundred years, but to determine as well as possible the statistical distribution of meteorological parameters, with their mean values and their variability over a wide range of time scales, up to the distribution of extreme events. If a butterfly can change the weather, it cannot affect the planet's large-scale energy balance, which determines the climate. The question is then no longer the reproducibility of one particular simulation but that of an ensemble of simulations: their trajectories all differ, but are the statistics of the ensemble stable and reproducible?
Note, however, that the reproducibility of a given simulation nevertheless remains a major objective for us. We regularly rerun the same simulation on a computer and check that the result has not changed, down to the last bit. While this kind of reproducibility has little physical meaning, it serves us as quality assurance: if something changes, we must understand where the change comes from and determine whether it is a bug or a feature.
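The quality-assurance check described above amounts to a byte-level comparison of outputs. A minimal sketch of such a check (our illustration, assuming simple flat output files; the file names are hypothetical) could look like this:

    /* Minimal sketch (illustration): compare a fresh simulation output
     * against a stored reference, byte for byte. Any difference, even
     * in the last bit of one value, is flagged for investigation. */
    #include <stdio.h>

    static int files_bitwise_equal(const char *path_a, const char *path_b) {
        FILE *fa = fopen(path_a, "rb");
        FILE *fb = fopen(path_b, "rb");
        int equal = (fa != NULL && fb != NULL);
        while (equal) {
            int ca = fgetc(fa), cb = fgetc(fb);
            if (ca != cb) equal = 0;       /* mismatch or different size */
            else if (ca == EOF) break;     /* both files ended together  */
        }
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return equal;
    }

    int main(int argc, char **argv) {
        if (argc != 3) {
            fprintf(stderr, "usage: %s reference.out new.out\n", argv[0]);
            return 2;
        }
        int ok = files_bitwise_equal(argv[1], argv[2]);
        printf("%s\n", ok ? "bit-for-bit identical" : "results differ");
        return ok ? 0 : 1;
    }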
Bios
Olivier Marti is a researcher at LSCE. He studies paleoclimates and climate variability on decadal, centennial, and millennial time scales. He works on ocean-atmosphere coupling, both on its physical aspects and on time-stepping schemes. For 12 years he led the Scientific Computing team at LSCE, during the period when climate models migrated from vector computers to massively parallel machines.
Jérôme Servonnat is a researcher at LSCE. He works on climate modeling, model evaluation, and the representativeness of simulation ensembles.
Arnaud Caubel is a scientific computing engineer at LSCE. Since 2010, he has co-led the group of engineers in charge of the technical development of the IPSL climate model, of support for the model's user community, and of relations with the national computing centers.
15h30 - 16h00 : Mini-apps: an effective tool to address the question of reproducibility and performance (Henri Calandra, Jie Meng, TotalEnergies)
Abstract
As computer programs and platforms grow more complex, it is becoming more and more difficult to bridge the gap between "standard benchmark" results and how real applications perform on fully realized HPC systems.
Mini-apps, defined as self-contained proxies for real applications, can be seen as a desirable middle ground between benchmarks and full-scale applications.
Mini-apps serve multiple objectives. They can reliably predict application performance without the effort of porting the full-scale application to new systems. They must be easy to understand, deploy, modify, and rewrite. They are useful for testing and exploring programming models. They can also guarantee the reproducibility of numerical results across programming models and hardware platforms.
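As an illustration of how a mini-app can make cross-platform reproducibility checkable with a single number, here is a minimal sketch (hypothetical, not one of the project's mini-apps): a tiny stencil kernel that ends with a fixed-order checksum, so ports to other programming models or machines can be compared directly.

    /* Minimal sketch (hypothetical, not one of the project's mini-apps):
     * a tiny 3-point stencil kernel ending with a fixed-order checksum.
     * Running the same kernel under another programming model or on
     * another platform and comparing checksums gives a quick numerical
     * reproducibility test. */
    #include <stdio.h>

    #define N 1000
    #define STEPS 100

    int main(void) {
        static double u[N], v[N];
        for (int i = 0; i < N; i++) u[i] = (i == N / 2) ? 1.0 : 0.0;

        for (int s = 0; s < STEPS; s++) {             /* smoothing kernel */
            for (int i = 1; i < N - 1; i++)
                v[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1];
            for (int i = 1; i < N - 1; i++) u[i] = v[i];
        }

        double checksum = 0.0;                        /* fixed-order sum */
        for (int i = 0; i < N; i++) checksum += u[i];
        printf("checksum = %.17g\n", checksum);       /* compare across ports */
        return 0;
    }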
During this talk we will present the mini-apps project we are introducing at TotalEnergies, with the following objectives:
– Identify technologies suitable for future production systems,
– Evaluate hardware, including tuning and profiling,
– Evaluate programming models, algorithms, and software stacks using relevant data sets,
– Focus on numerical reproducibility and system sanity checking,
– Propose a training platform for onboarding new developers.
To illustrate these different topics, we will present preliminary results obtained from two mini-apps that are part of this project.
Bio
Henri Calandra obtained his M.Sc. in mathematics in 1984 and a Ph.D. in applied mathematics in 1987 from the Université de Pau et des Pays de l'Adour in Pau, France. He joined Cray Research France in 1987 and worked on seismic applications for two years. In 1989, he joined the applied mathematics department of the French atomic energy agency, and in 1990 he started working for Total SA. After 12 years of work in high-performance computing and research on pre-stack depth migration as a project manager, in 2002 he started the geophysical research group of Total USA in Houston and led it for three years. Back in France in 2005, he coordinated the depth imaging research activity for Total and became an Expert in Numerical Algorithms and High Performance Computing for Geosciences in 2008. From January 2014 to December 2016, as VP at TOTAL EP R&T USA in Houston, Henri started a new research department on computational science and engineering and led the research activity on advanced numerical methods and high-performance computing for Total SA. Back in France in 2017, he initiated an R&D project exploring the potential and feasibility of quantum computing, before starting a new R&D activity in early 2022 on CO2 storage and monitoring.
16h00 - 17h00 : Round table (moderators Violaine Louvet, CNRS/LJK, and Arnaud Legrand, CNRS/LIG)
Reproducibility in an HPC context is complex because it involves diverse issues: the availability of codes and infrastructures, economic and environmental costs, dependence on sometimes proprietary software stacks, and more. The goal of this round table is to synthesize the subject, particularly in light of the day's presentations.
17h00 : End