The AMVA4NewPhysics work package I am involved in focuses on developing tools for recasting new physics searches, with a particular emphasis on multivariate analyses. In this article, I would like to explain a little more about what that means.
Let me begin by describing the motivation to look for new physics at the Large Hadron Collider (LHC). Although the Standard Model (SM) of particle physics is very successful at describing most properties of elementary particles, there are many reasons to believe that nature is more complicated and that there is new physics, or physics beyond the Standard Model (BSM). Firstly, there is the hierarchy problem, which asks why the electroweak scale, set by the Higgs mass of 125 GeV, can be so light compared to the cutoff scale of the SM. If there is no BSM physics, that cutoff is the Planck scale, which is 17 orders of magnitude larger than the electroweak scale. The Higgs mass is unprotected by any symmetry in the SM, and the quantum corrections to its squared mass grow with the square of the cutoff scale. Keeping the Higgs boson light in the presence of these corrections requires an extreme fine-tuning, which seems totally unnatural. Secondly, as explained previously, there exists dark matter, which is thought by many to be a new type of elementary particle. For these and many other reasons, people are excited about the prospect of discovering BSM physics at the LHC.
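To make the fine-tuning argument a little more concrete: the textbook one-loop estimate of the correction to the Higgs squared mass from a heavy fermion with Yukawa coupling \(\lambda_f\), cut off at a scale \(\Lambda_{\mathrm{UV}}\), takes the schematic form

\[
\Delta m_H^2 \;\sim\; -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda_{\mathrm{UV}}^2 .
\]

For \(\Lambda_{\mathrm{UV}}\) near the Planck scale, this correction exceeds the observed \(m_H^2\) by more than 30 orders of magnitude, so the bare mass would have to cancel it to a correspondingly absurd precision. (This is a generic estimate, not tied to any particular BSM scenario.)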

 

[Figure: Supersymmetric production of neutralinos via squark decays in a proton-proton interaction]

How do experimentalists analyze the LHC data? In a typical ATLAS or CMS analysis, for example a search for squarks (the supersymmetric partners of SM quarks) by the ATLAS collaboration, one does not really “see” the squarks at the LHC, but rather their decay products. In the example above, experimentalists limit themselves to events with jets and large missing transverse energy (originating from neutralinos, supersymmetric particles invisible to the detector) in the final state. Analyses are then performed to determine whether the observed events originate from known SM physics (background) or from some new physics (signal). The result of these analyses is interpreted as a statistical limit or constraint on the parameters of the model (the squark mass in this example).
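To give a feel for what such a selection looks like in practice, here is a minimal Python sketch. The event structure, thresholds, and function names are purely illustrative, not those of any real ATLAS or CMS analysis:

```python
# Toy event selection: keep events with several hard jets and large
# missing transverse energy (MET), as in a generic squark search.
# The Event structure and all numerical thresholds are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    jet_pts: List[float]   # transverse momenta of reconstructed jets, in GeV
    met: float             # missing transverse energy, in GeV

def passes_selection(event: Event,
                     min_jets: int = 2,
                     jet_pt_cut: float = 100.0,
                     met_cut: float = 200.0) -> bool:
    """Require at least `min_jets` jets above `jet_pt_cut` and MET above `met_cut`."""
    hard_jets = [pt for pt in event.jet_pts if pt > jet_pt_cut]
    return len(hard_jets) >= min_jets and event.met > met_cut

# Example: one event passing and one failing the toy cuts.
events = [Event(jet_pts=[350.0, 180.0, 40.0], met=310.0),
          Event(jet_pts=[90.0, 60.0], met=120.0)]
selected = [ev for ev in events if passes_selection(ev)]
print(f"{len(selected)} of {len(events)} events pass the selection")
```

Real analyses define signal regions with many more variables, but the principle is the same: a set of cuts that enhances the hypothetical signal over the SM background.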

Typically, experimentalists use two approaches to interpret the LHC data. The first approach is to investigate a limited number of parameters by considering a constrained version of the full model. For example, one can consider the experimental constraints on the Constrained Minimal Supersymmetric Standard Model (CMSSM), which contains 5 free parameters (in contrast with the full Minimal Supersymmetric Standard Model, which has more than 100 free parameters!). However, the interpretation often becomes difficult when one (especially a theorist with new ideas or models) wishes to consider even a slightly modified version of the model. This approach therefore only allows a limited class of models to be tested.
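In this approach, one typically scans the handful of free parameters on a grid and asks, point by point, whether the data excludes the model. Here is a toy sketch of such a scan over two CMSSM-like mass parameters; the `is_excluded` function is a hypothetical stand-in for a real spectrum calculation plus comparison to the experimental result:

```python
# Toy grid scan over two of the five CMSSM parameters (m0, m1/2),
# with the remaining parameters held fixed.
import itertools

def is_excluded(m0: float, m12: float) -> bool:
    # Hypothetical stand-in: pretend points with light spectra are excluded.
    return 0.8 * m0 + 1.2 * m12 < 1000.0  # GeV, illustrative only

m0_grid = range(100, 2100, 500)    # GeV
m12_grid = range(100, 1100, 250)   # GeV
for m0, m12 in itertools.product(m0_grid, m12_grid):
    status = "excluded" if is_excluded(m0, m12) else "allowed"
    print(f"m0 = {m0:5d} GeV, m1/2 = {m12:5d} GeV -> {status}")
```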
The second approach is the so-called simplified model approach, where only a small number of particles and interactions are considered when interpreting the LHC data. Limits are often presented assuming a certain production and decay topology. For example, limits can be placed on squarks assuming they are pair-produced via the QCD interaction and decay with a 100% branching ratio to a quark and a neutralino (see figure). This approach aims to impose experimental constraints on a wide class of models. Its shortcoming is that most realistic models have more complicated interactions that are not covered by the simplified models, so using these results may lead to an inappropriate evaluation of the realistic model of interest.
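The 100% branching ratio assumption matters more than it might seem. For pair production in which both squarks decay via the assumed channel, the rate into the simplified-model topology scales as the branching ratio squared, so a realistic model with extra decay modes can easily escape the naive limit. A small sketch (all numbers hypothetical, not from any real analysis):

```python
# Toy reinterpretation of a simplified-model cross-section limit.
# With both pair-produced squarks decaying to quark + neutralino,
# the signal rate in the assumed topology scales as BR**2.

def excluded(sigma_pair_pb: float, br_q_chi: float, sigma_limit_pb: float) -> bool:
    """True if the model's rate into the assumed topology exceeds the limit."""
    sigma_effective = sigma_pair_pb * br_q_chi**2
    return sigma_effective > sigma_limit_pb

sigma_pair = 0.5    # pb, hypothetical squark pair-production cross section
sigma_limit = 0.2   # pb, hypothetical 95% CL upper limit from the analysis
for br in (1.0, 0.6, 0.3):
    verdict = "excluded" if excluded(sigma_pair, br, sigma_limit) else "not excluded"
    print(f"BR = {br:.1f}: {verdict}")
```

In this toy example, the point that looks excluded under the 100% branching ratio assumption survives once BR drops to 0.6, which is exactly the kind of distortion a proper recast is meant to avoid.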
The aim of recasting LHC analyses is to avoid the shortcomings mentioned above and to study the experimental limits on models not covered by the experimentalists. To do this, the experimental analyses have to be re-implemented: generating events, simulating detector effects, and performing the statistical analyses. By doing so, one can study how a certain LHC analysis constrains, in principle, any model of interest. Furthermore, since any model of new physics generically makes multiple distinctive predictions for collider signatures, which have to be cross-checked when assessing the validity of the model, recasting also allows one to perform this kind of check flexibly. You can find some of my previous works that more or less follow this philosophy here, here, and here. Within this network, I am particularly interested in recasting multivariate analyses at the LHC.
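The final step of that pipeline, the statistical analysis, can be illustrated with a minimal single-bin counting experiment using the CLs criterion. This is only a sketch of the simplest possible case; all yields are hypothetical, and in a real recast they would come from the event generation and detector simulation steps:

```python
# Minimal sketch of the last step of a recast: a single-bin counting
# experiment tested with the CLs criterion.
from scipy.stats import poisson

def cls_excluded(n_obs: int, b: float, s: float, alpha: float = 0.05) -> bool:
    """Exclude the signal hypothesis at (1 - alpha) CL if CLs < alpha."""
    p_sb = poisson.cdf(n_obs, s + b)   # p-value under signal + background
    p_b = poisson.cdf(n_obs, b)        # p-value under background only
    return (p_sb / p_b) < alpha

# Hypothetical inputs: cross section, luminosity, efficiency x acceptance.
sigma_pb, lumi_fb, eff_acc = 0.05, 36.0, 0.10
s = sigma_pb * 1000.0 * lumi_fb * eff_acc   # expected signal events (1 pb = 1000 fb)
b, n_obs = 50.0, 52                         # expected background and observed count
print(f"s = {s:.1f} expected signal events -> excluded: {cls_excluded(n_obs, b, s)}")
```

Real analyses use profile-likelihood test statistics, systematic uncertainties, and many bins, but the logic is the same: compare the observed counts with the background-only and signal-plus-background hypotheses.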

That is all for this article. Let me know your thoughts in the comment section!