https://doi.org/10.1051/epjconf/202429504043
The ATLAS Workflow Management System Evolution in the LHC Run 3 and towards the High-Luminosity LHC era
1 Institut de Física d’Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain
2 Port d’Informació Científica (PIC), Campus UAB, 08913 Bellaterra (Cerdanyola del Vallès), Spain
3 Brookhaven National Laboratory, 98 Rochester St, Upton, NY 11973, USA
* e-mail: pacheco@ifae.es
Published online: 6 May 2024
The ATLAS experiment has more than 18 years of experience using workload management systems to develop and deploy workflows that process and simulate data on its distributed computing infrastructure. Simulation, processing and analysis of LHC experiment data require the coordinated work of heterogeneous computing resources. In particular, the ATLAS experiment utilizes the resources of 250 computing centres worldwide, the power of supercomputing centres, and national, academic and commercial cloud computing resources. In this contribution, we present new techniques introduced in the workflow management system software to improve efficiency cost-effectively. The evolution from a mesh framework to new types of computing facilities, such as clouds and HPCs, is described, as well as new types of production and analysis workflows.
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.