*by Mikael Kuusela*

In an earlier blog post, Tommaso challenged me to write about a new unfolding method I’ve developed together with Philip Stark. Before describing the new method, I should first explain what the unfolding problem is and why it is a tough problem to solve.

The problem arises because of the finite resolution of LHC particle detectors. This means that any measurement performed at the LHC is corrupted by a small amount of random noise. For example, when the energy of a particle is measured in a calorimeter, the value might only be accurate to the first decimal place with the rest driven by stochastic noise.

Now, imagine measuring a spectrum of these energies. Because of the noisy energy measurements, the spectrum will appear to be “blurred” or “smeared” with respect to the true physical spectrum. The unfolding problem is then to use the smeared spectrum to reconstruct the physical “uncorrupted” spectrum. This is illustrated in Fig. 1.
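The smearing can be written as a simple linear forward model: in expectation, the observed histogram is a response matrix applied to the true histogram, with Poisson counting fluctuations on top. Here is a minimal toy sketch in Python; every number in it (bin layout, resolution, spectrum shape) is made up purely for illustration and is not from any real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "true" spectrum: a steeply falling histogram over 30 bins.
n_bins = 30
centers = np.arange(n_bins) + 0.5
true_spectrum = 1000.0 * np.exp(-0.2 * centers)

# Response matrix K[i, j]: probability that an event generated in true
# bin j is observed in smeared bin i (Gaussian resolution, 1.5 bins wide).
sigma = 1.5
K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / sigma) ** 2)
K /= K.sum(axis=0)  # normalize columns so no events are lost

# The smeared spectrum is K times the truth, plus Poisson counting noise.
expected = K @ true_spectrum
observed = rng.poisson(expected).astype(float)
```

Unfolding is the inverse problem: given `observed` and `K`, recover `true_spectrum`.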

This may sound a bit abstract so far, but we are all familiar with a similar problem from digital photography. Whenever you take a picture with your digital camera, the picture will appear slightly blurred because of imperfections in the optics of the camera. A problem analogous to unfolding is to take this blurred image and to find a way to deblur it in order to recover the underlying sharp image. In fact, we can think of the LHC detectors as extremely complicated digital cameras which take three-dimensional images of particle collisions. The unfolding problem can then be understood as the problem of deblurring the information extracted from these images.

It turns out that this problem is a really tough nut to crack. The reason is that there is a very large number of unfolded spectra which, within statistical uncertainties, can explain any given smeared spectrum. In mathematical jargon, such problems are called ill-posed inverse problems. What this means in practice is that standard statistical techniques, such as maximum likelihood estimation, tend to give unfolded estimates that make no sense at all, with uncertainties that are way off the scale.
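One can see the ill-posedness concretely by trying the naive approach of directly inverting the response matrix. A sketch using the same kind of toy setup as above (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: steeply falling truth, Gaussian smearing, Poisson noise.
n_bins = 30
centers = np.arange(n_bins) + 0.5
true_spectrum = 1000.0 * np.exp(-0.2 * centers)
K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 1.5) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true_spectrum).astype(float)

# Naive "unfolding": invert the response matrix directly.
naive = np.linalg.solve(K, observed)

# K is severely ill-conditioned, so the modest Poisson fluctuations are
# amplified into huge oscillations: the estimate is useless.
print(np.linalg.cond(K))         # very large condition number
print(naive.min(), naive.max())  # wild swings far outside the truth
```

The naive estimate oscillates between large positive and large negative values, even though the truth is a gentle falling curve. This is exactly the pathology that regularization (or, as argued below, shape constraints) must tame.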

The way to overcome these issues is to reduce the number of possible solutions by using a priori information about physically plausible unfolded spectra. The typical way to introduce such information is to require the solutions to be smooth functions. This sounds like a sensible thing to do, since typical high energy physics spectra are indeed smooth. Many unfolding techniques currently used in LHC data analysis rely on this smoothness assumption.
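A standard way to encode the smoothness assumption is Tikhonov regularization: penalize the curvature (second differences) of the solution. Here is a minimal sketch on the same toy setup; the regularization strength `tau` is an arbitrary choice made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: steeply falling truth, Gaussian smearing, Poisson noise.
n_bins = 30
centers = np.arange(n_bins) + 0.5
true_spectrum = 1000.0 * np.exp(-0.2 * centers)
K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 1.5) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true_spectrum).astype(float)

# Second-difference operator: D @ x approximates the curvature of x.
D = np.diff(np.eye(n_bins), n=2, axis=0)

# Tikhonov-regularized unfolding: min ||K x - y||^2 + tau ||D x||^2.
# tau controls how much smoothness is imposed; choosing it is the
# hard part.
tau = 1.0
x_smooth = np.linalg.solve(K.T @ K + tau * D.T @ D, K.T @ observed)
```

The penalty stabilizes the inversion, but the answer (and its apparent uncertainty) depends on the choice of `tau`.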

The problem with this assumption, however, is that one has to decide how smooth the solution should be. We know that the solution should generally be smooth, but it is hard to say how much smoothness should actually be required. One consequence is that it becomes very difficult to quantify the uncertainty associated with unfolding based on smoothness assumptions. This is indeed a major shortcoming of existing unfolding techniques: they struggle to provide reliable estimates of their own uncertainty.

This brings us to my latest work on this problem, which I described in a seminar at the University of Padova in December. The key idea of our proposal is to replace the smoothness assumption with assumptions concerning the shape of the unfolded spectrum. These shape constraints can, for example, concern the positivity, monotonicity, convexity or unimodality of the solution. Such shape assumptions are not only an effective way of constraining the set of possible solutions; more importantly, they are simple yes-or-no assumptions, where physicists can usually use their insight about the problem to say whether the assumption is realistic or not.

For example, any physically meaningful solution should be positive. The positivity constraint alone will not restrict the solution space very much, but in high energy physics applications we are lucky: we usually also know that the solution should be monotonic and convex. This is especially true for steeply falling differential cross sections, which are very commonly encountered in unfolding analyses. Fig. 2 demonstrates the improvement one can obtain by imposing shape constraints on the unfolded solution.
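Positivity and monotonicity are easy to impose in practice. One convenient trick (a sketch of the general idea, not the exact machinery of our paper) is to reparameterize a nonnegative, decreasing histogram as a cumulative sum of nonnegative increments, which turns the constrained fit into an ordinary nonnegative least-squares problem:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy setup: steeply falling truth, Gaussian smearing, Poisson noise.
n_bins = 30
centers = np.arange(n_bins) + 0.5
true_spectrum = 1000.0 * np.exp(-0.2 * centers)
K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 1.5) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true_spectrum).astype(float)

# Write x = C @ u with u >= 0, where C is upper-triangular ones:
# then x[j] = u[j] + u[j+1] + ..., so x is automatically
# nonnegative and decreasing.
C = np.triu(np.ones((n_bins, n_bins)))
u_hat, _ = nnls(K @ C, observed)
x_shape = C @ u_hat  # positive, monotone-decreasing unfolded estimate
```

Convexity can be imposed the same way by stacking a second cumulative-sum layer. Note that there is no tuning parameter here: the constraint either holds or it does not.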

The beauty of shape-constrained unfolding is that we can use this approach to compute reliable uncertainties in the unfolded space. This is done by looking at the set of all those solutions that are, up to statistical uncertainties, consistent with the smeared observations while also satisfying the shape constraints. All solutions in this set are possible explanations for the observed data and hence this set provides a rigorous estimate of the uncertainty in the unfolded space.

In technical terminology, the confidence intervals derived from this set have guaranteed frequentist coverage, provided that the true solution satisfies the shape constraints. The key here is to ask the solutions to both explain the data and satisfy the shape assumptions – without the latter, the uncertainty estimates would blow up and we would be back to square one.
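In the spirit of this construction (though simplified relative to our paper), bin-wise bounds can be sketched as optimization problems: maximize and minimize a given unfolded bin over all spectra that are positive, decreasing, and reproduce every smeared bin to within a few standard deviations. With a per-bin band in the smeared space this becomes a linear program; all numbers below, including the band width `z`, are illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy setup: steeply falling truth, Gaussian smearing, Poisson noise.
n_bins = 30
centers = np.arange(n_bins) + 0.5
true_spectrum = 1000.0 * np.exp(-0.2 * centers)
K = np.exp(-0.5 * ((centers[:, None] - centers[None, :]) / 1.5) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true_spectrum).astype(float)

# Data constraint: each bin of K @ x must lie within a band around the
# observed count (roughly z standard deviations of the Poisson noise).
z = 4.0
half_width = z * np.sqrt(observed + 1.0)

# Monotonicity rows: x[j+1] - x[j] <= 0 for all j.
M = np.diff(np.eye(n_bins), axis=0)

A_ub = np.vstack([K, -K, M])
b_ub = np.concatenate([observed + half_width,
                       -(observed - half_width),
                       np.zeros(n_bins - 1)])

# Lower and upper bound for one unfolded bin (bin 10 here).
k = 10
c = np.zeros(n_bins)
c[k] = 1.0
lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
print(lo.fun, -hi.fun)  # envelope for bin k
```

Repeating this for every bin traces out an uncertainty envelope like the ones in Fig. 3. The shape constraints are what keep these bounds finite; without them the maximization would run off to the wild oscillating solutions seen earlier.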

We demonstrated this approach using a simulation study designed to mimic the unfolding of the inclusive jet transverse momentum spectrum in the CMS experiment. The resulting unfolded uncertainties are shown in Fig. 3 for different shape constraints.

In this particular case, the true solution is known to be positive, decreasing and convex, and imposing all of these constraints helps to significantly reduce our uncertainty about the unfolded spectrum. The envelopes shown in this figure have a very precise scientific meaning: if the experiment were repeated many times, they are guaranteed to contain the true solution at least 95% of the time.

In other words, in the long run, at least 95% of our unfolded results would be correct in the sense that the envelope contains the truth. And this is precisely the key advantage of our new method: it guarantees by construction that the unfolded uncertainties have this important property, while none of the existing techniques based on smoothness assumptions can provide the same guarantee.

Further information about this work is available in our arXiv preprint.
