https://doi.org/10.1051/epjconf/201921403006
Improving efficiency of analysis jobs in CMS
1 University of Sofia, Sofia, Bulgaria
2 Università e INFN Trieste, Trieste, Italy
3 University of Notre Dame, Notre Dame, IN, USA
4 University of California San Diego, La Jolla, CA, USA
5 Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid, Spain
6 Port d’Informació Científica (PIC), Barcelona, Spain
7 INFN Bari, Bari, Italy
8 Università e INFN Perugia, Perugia, Italy
9 California Institute of Technology, California, USA
10 University of Nebraska-Lincoln, Lincoln, NE, USA
11 Benemérita Universidad Autónoma de Puebla, Puebla, México
* Corresponding author, e-mail: todor.trendafilov.ivanov@cern.ch
Published online: 17 September 2019
Hundreds of physicists analyze data collected by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider using the CMS Remote Analysis Builder and the CMS global pool to exploit the resources of the Worldwide LHC Computing Grid. Efficient use of such an extensive and expensive resource is crucial. At the same time, the CMS collaboration is committed to minimizing time to insight for every scientist, by pushing for the fewest possible access restrictions to the full data sample and by supporting the free choice of applications to run on the computing resources. Supporting such a variety of workflows while preserving efficient resource usage poses special challenges. In this paper we report on three complementary approaches adopted in CMS to improve the scheduling efficiency of user analysis jobs: automatic job splitting, automated run-time estimates, and automated site selection for jobs.
© The Authors, published by EDP Sciences, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.