https://doi.org/10.1051/epjconf/202429504028
Job CPU Performance comparison based on MINIAOD reading options: Local versus remote
George W. Downs Laboratory of Physics and Charles C. Lauritsen Laboratory of High Energy Physics, 1200 E California Blvd, Pasadena, California 91125
* e-mail: jbalcas@caltech.edu
** e-mail: newman@hep.caltech.edu
*** e-mail: preeti@caltech.edu
**** e-mail: suppalap@caltech.edu
† e-mail: amoya@caltech.edu
‡ e-mail: catalinn.iordache@gmail.com
§ e-mail: raimis.sirvis@gmail.com
Published online: 6 May 2024
A critical challenge of performing data transfers or remote reads is to be as fast and efficient as possible while keeping the usage of system resources as low as possible. Ideally, the software that manages these data transfers should be able to organize them so that they run up to the hardware limits. A significant portion of LHC analyses use the same datasets, running over each file or dataset multiple times. By utilizing "on-demand" regional caches, we can improve CPU efficiency and reduce wide area network usage. Speeding up user analysis and reducing network usage (and hiding latency from jobs by caching the most essential files on demand) are significant challenges for the HL-LHC, where the data volume increases to the exabyte level. In this paper, we describe our journey and tests with the CMS XCache project (SoCal Cache), comparing job performance and CPU efficiency across different storage solutions (Hadoop, Ceph, local disk, Named Data Networking). We also provide insights into our tests over a wide area network and the possible storage and network usage savings.