https://doi.org/10.1051/epjconf/201921407005
Building and using containers at HPC centres for the ATLAS experiment
1 Duke University, Durham, NC, USA
2 Argonne National Lab, Lemont, IL, USA
3 University of Illinois at Urbana-Champaign, Champaign, IL, USA
4 University of Texas at Arlington, Arlington, TX, USA
5 Brookhaven National Lab, Upton, NY, USA
6 Lawrence Berkeley National Lab, Berkeley, CA, USA
7 SLAC National Accelerator Lab, Menlo Park, CA, USA
* e-mail: yangw@slac.stanford.edu
Published online: 17 September 2019
The HPC environment presents several challenges to the ATLAS experiment in running its automated computational workflows smoothly and efficiently, in particular with respect to software distribution and I/O load. CVMFS, a vital component of the LHC Computing Grid, is not always available in HPC environments. ATLAS computing first experimented with all-inclusive containers and later developed an environment to produce such containers for both Shifter and Singularity. The all-inclusive containers include most of the recent ATLAS software releases, database releases, and other tools extracted from CVMFS. This allows ATLAS to distribute software automatically to HPC centres in an environment identical to that provided by CVMFS, and it significantly reduces the metadata I/O load on HPC shared file systems. Production operation at NERSC has shown that, with this type of container, we can fit transparently into the previously developed ATLAS operational methods while scaling up to run many more jobs.
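As a rough sketch of how such a container might be used (illustrative only, not taken from the paper; the image path, setup script, and job options file are hypothetical placeholders), a job wrapper on an HPC compute node could launch an ATLAS payload inside an all-inclusive Singularity image along these lines:

```python
import subprocess

# Hypothetical all-inclusive image holding ATLAS software and database
# releases extracted from CVMFS; the path and release are placeholders.
IMAGE = "/global/common/atlas/atlas-all-inclusive-21.0.sif"

# Run the payload inside the container. --bind exposes the host scratch
# area; sourcing a setup script inside the image before running athena
# is an assumed convention, shown here for illustration only.
subprocess.run(
    [
        "singularity", "exec",
        "--bind", "/scratch:/scratch",
        IMAGE,
        "/bin/bash", "-c",
        "source /release_setup.sh && athena myJobOptions.py",
    ],
    check=True,
)
```

On Shifter-based systems such as those at NERSC, the same image content would be launched with Shifter's own runtime instead of Singularity.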
© The Authors, published by EDP Sciences, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.