Abstract: We describe a software package, TomOpt, developed to optimise the geometrical layout and specifications of detectors designed for tomography by scattering of cosmic-ray muons. The software exploits differentiable programming for the modelling of muon interactions with detectors and scanned volumes, the inference of volume properties, and the optimisation cycle performing the loss minimisation. In doing so, we provide the first demonstration of end-to-end-differentiable and inference-aware optimisation of particle physics instruments. We study the performance of the software on relevant benchmark scenarios and discuss its potential applications.
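The core idea of the abstract above, optimising detector geometry by gradient descent through a differentiable model of measurement and inference, can be illustrated with a short, self-contained sketch. The code below is not the TomOpt API: the panel parametrisation, the toy loss trading angular resolution against acceptance, and all names are illustrative assumptions.

```python
# Illustrative sketch only: not the TomOpt API. Detector parameters are torch
# tensors, a toy differentiable "forward model + inference" maps geometry to a
# scalar loss, and gradient descent updates the geometry end-to-end.
import torch

torch.manual_seed(0)

# Toy detector: z-positions of two panel pairs above and below a scanned volume.
panel_z = torch.tensor([1.0, 0.8, -0.8, -1.0], requires_grad=True)

def toy_inference_loss(z: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for 'propagate muons -> infer volume properties'.

    The angular resolution of each panel pair improves with its lever arm,
    while panels placed far from the volume cost acceptance; the two terms
    compete, so the optimiser has a non-trivial geometry to find.
    """
    lever_top = (z[0] - z[1]).clamp(min=1e-3)
    lever_bot = (z[2] - z[3]).clamp(min=1e-3)
    resolution_term = 1.0 / lever_top**2 + 1.0 / lever_bot**2
    acceptance_term = (z**2).sum()          # penalise panels drifting away
    return resolution_term + 0.5 * acceptance_term

optimiser = torch.optim.Adam([panel_z], lr=1e-2)
for step in range(500):
    optimiser.zero_grad()
    loss = toy_inference_loss(panel_z)
    loss.backward()                         # gradients flow through the model
    optimiser.step()

print("optimised panel z-positions:",
      [round(v, 3) for v in panel_z.detach().tolist()])
```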
Abstract: In this community review report, we discuss applications and techniques for fast machine learning (ML) in science -- the concept of integrating powerful ML methods into the real-time experimental data processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples and inspiration for scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material, which can enable these breakthroughs.
Abstract: Between the years 2015 and 2019, members of the Horizon 2020-funded Innovative Training Network named "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of those methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with an evaluation of their performance.
Abstract: Matrix inversion problems are often encountered in experimental physics, and in particular in high-energy particle physics, under the name of unfolding. The true spectrum of a physical quantity is deformed by the presence of a detector, resulting in an observed spectrum. If we discretize both the true and observed spectra into histograms, we can model the detector response via a matrix. Inferring the true spectrum from an observed spectrum therefore requires inverting the response matrix. Many methods exist in the literature for this task, all starting from the observed spectrum and using a simulated true spectrum as a guide to obtain a meaningful solution in cases where the response matrix is not easily invertible. In this manuscript, I take a different approach to the unfolding problem. Rather than inverting the response matrix and transforming the observed distribution into the most likely parent distribution in generator space, I sample many distributions in generator space, fold them through the original response matrix, and pick the generator-level distribution that yields the folded distribution closest to the data distribution. Regularization schemes can be introduced to treat the case where non-diagonal response matrices result in high-frequency oscillations of the solution in true space, and the bias they introduce is studied. The algorithm performs as well as traditional unfolding algorithms in cases where the inverse problem is well-defined in terms of the discretization of the true and smeared space, and outperforms them in cases where the inverse problem is ill-defined, i.e. when the number of truth-space bins is larger than the number of smeared-space bins. These advantages stem from the fact that the algorithm does not technically invert any matrix and uses only the data distribution as a guide to choose the best solution.
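A minimal sketch of the sampling-based unfolding idea described above: candidate true-space histograms are drawn at random, folded through the known response matrix, and the candidate whose folded spectrum lies closest to the observed data is retained. The bin counts, the flat Dirichlet sampling of candidates, and the chi-square distance are illustrative assumptions for this toy, not the paper's exact choices.

```python
# Illustrative sketch of the sampling-then-folding idea; shapes, priors and
# the distance metric are assumptions made for this toy, not the paper's.
import numpy as np

rng = np.random.default_rng(42)

# Response matrix R[i, j] = P(observed bin i | true bin j). More truth bins
# than smeared bins, so a direct matrix inversion is ill-defined.
n_true, n_obs = 6, 4
R = rng.random((n_obs, n_true))
R /= R.sum(axis=0, keepdims=True)           # each truth bin fully mapped

true_spectrum = rng.integers(50, 200, size=n_true).astype(float)
observed = rng.poisson(R @ true_spectrum)   # pseudo-data in smeared space

def chi2(folded, data):
    """Simple chi-square distance between a folded candidate and the data."""
    return float(((folded - data) ** 2 / np.maximum(data, 1.0)).sum())

best, best_chi2 = None, np.inf
total = observed.sum()
for _ in range(100_000):
    # Sample a candidate true-space distribution (flat Dirichlet prior here),
    # fold it through R, and keep it if its folded spectrum is the closest yet.
    candidate = rng.dirichlet(np.ones(n_true)) * total
    c2 = chi2(R @ candidate, observed)
    if c2 < best_chi2:
        best, best_chi2 = candidate, c2

print("true spectrum (toy)        :", true_spectrum)
print("best sampled true spectrum :", np.round(best, 1))
print("chi2 of its folded spectrum:", round(best_chi2, 2))
```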
Abstract: For data sets populated by a very well modeled process and by another process of unknown probability density function (PDF), a desirable feature when manipulating the fraction of the unknown process (either to enhance or to suppress it) is to avoid modifying the kinematic distributions of the well modeled one. A bootstrap technique is used to identify sub-samples rich in the well modeled process, and to classify each event according to the frequency with which it is part of such sub-samples. Comparisons with general MVA algorithms are shown, as well as a study of the asymptotic properties of the method, using a public-domain data set that models a typical search for new physics as performed at hadronic colliders such as the Large Hadron Collider (LHC).
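A toy, hedged illustration of the bootstrap idea in the abstract above: many bootstrap sub-samples are drawn, those most compatible with the well modeled process are selected, and each event is scored by how frequently it enters the selected sub-samples. The Gaussian toy model, the compatibility criterion based on the sub-sample mean, and the 10% selection quantile are all illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative toy, not the paper's exact procedure: bootstrap sub-samples are
# ranked by compatibility with the well modeled ('background') process, and
# each event is scored by how often it enters the most compatible sub-samples.
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D data set: a well modeled background (standard normal) plus a smaller
# unknown component shifted to larger values of the observable.
background = rng.normal(0.0, 1.0, size=9000)
signal = rng.normal(2.5, 0.5, size=1000)
data = np.concatenate([background, signal])
n = data.size

n_boot, m = 4000, 500
subsamples = [rng.integers(0, n, size=m) for _ in range(n_boot)]

# Compatibility of each sub-sample with the background model N(0, 1), here
# measured simply by how close its mean is to the model mean of zero.
compat = np.array([abs(data[idx].mean()) for idx in subsamples])
selected = compat < np.quantile(compat, 0.10)   # keep the 10% most compatible

counts_drawn = np.zeros(n)
counts_selected = np.zeros(n)
for idx, keep in zip(subsamples, selected):
    np.add.at(counts_drawn, idx, 1)
    if keep:
        np.add.at(counts_selected, idx, 1)

# Per-event score: frequency of appearing in background-rich sub-samples;
# events from the unknown component should score lower on average.
score = counts_selected / np.maximum(counts_drawn, 1)
print("mean score, background events     :", round(float(score[:9000].mean()), 4))
print("mean score, unknown-process events:", round(float(score[9000:].mean()), 4))
```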