Yesterday Mikael Kuusela gave a well-attended seminar at the Department of Statistical Sciences of the University of Padova. Mikael is a member of the AMVA4NewPhysics network and is about to obtain his doctorate in Statistics at the Ecole Polytechnique Fédérale de Lausanne (EPFL) under the supervision of Victor Panaretos. He is also a member of the CMS collaboration and participates in many ways in the research activities of the experiment. Among other things, he acts as a consultant for the CMS Statistics Committee, the body responsible for ensuring the correctness and soundness of the statistical procedures used in CMS analyses to produce scientific results.

Mikael was invited to Padova by Prof. Bruno Scarpa, a member of the UNIPD node of the network, to present his new unfolding method, which promises to be of special interest to physicists but is of course also relevant to statistical studies. And the timing of the seminar could not have been better: the paper describing the new unfolding procedure was posted on the Cornell ArXiv just two days ago!

It would be very hard for me to do a good job of explaining the details of Mikael’s work. What I can certainly do is explain the general issue and the way he attacks it. So: unfolding is a technique by means of which one tries to remove the effect of instrumental noise from a measurement, obtaining a more accurate estimate of the true distribution of the quantity being measured.

If, for instance, a point-like source of light (like a star) produces a fuzzy ball in your picture once you “fold in” the imperfect position measurements of the photons you recorded, you can -by knowing exactly the characteristics of your camera and lens- “unfold” the smearing effect and get back to a very narrow image. The problem is very general, as you may well understand, and common in high-energy physics, where our detectors can be thought of as giant digital cameras performing millions of measurements with imperfect resolution.
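To make this concrete, here is a minimal toy sketch in Python (all numbers and names are my own illustration, not taken from Mikael’s work): we discretize a steeply falling “true” spectrum, smear it with a known Gaussian response matrix, add Poisson noise, and then try to undo the smearing by brute-force matrix inversion.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)

# Hypothetical "true" spectrum: steeply falling, as for jet energies.
true = 1000.0 * np.exp(-4.0 * x)

# Response matrix K[i, j]: probability that an event in true bin j is
# reconstructed in observed bin i (Gaussian resolution of two bin widths).
sigma = 2.0 * (x[1] - x[0])
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
K /= K.sum(axis=0)  # each column sums to one: events are only displaced

# "Folding": what the detector records, with Poisson fluctuations.
observed = rng.poisson(K @ true)

# Naive "unfolding" by direct inversion: exact for noiseless data, but the
# statistical noise gets amplified into wild bin-to-bin oscillations.
naive = np.linalg.solve(K, observed.astype(float))
print("true spectrum range:  ", true.min(), "to", true.max())
print("naive unfolding range:", naive.min(), "to", naive.max())
```

The second print line reveals the trouble: the naively unfolded spectrum swings to huge negative and positive values, nothing like the smooth curve we started from. This is why unfolding needs more than matrix algebra.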

There exist many methods to unfold data. All of them require a dose of “regularization” to produce a meaningful result, for a mathematical reason the toy example above hints at: the smearing matrix is nearly singular, so a direct inversion wildly amplifies the statistical fluctuations in the data. The regularization techniques usually applied in the most common unfolding methods (SVD, D’Agostini, etc.) are however potentially dangerous, as their intrinsic arbitrariness spoils the correctness of the uncertainties one can derive for the unfolded results. And physicists really care about producing results whose uncertainties are precisely determined.
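To give a flavour of what regularization does, and of where the arbitrariness creeps in, here is a toy truncated-SVD inversion of the same smearing problem. This is a simplified stand-in for the idea behind SVD-type unfolding, not a faithful implementation of any of the methods just mentioned.

```python
import numpy as np

# Same toy setup as before: falling spectrum, Gaussian smearing, Poisson noise.
rng = np.random.default_rng(0)
n = 40
x = np.linspace(0.0, 1.0, n)
true = 1000.0 * np.exp(-4.0 * x)
sigma = 2.0 * (x[1] - x[0])
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true)

# Truncated-SVD inversion: the small singular values of K are the ones that
# blow up statistical noise, so we simply drop them from the pseudo-inverse.
U, s, Vt = np.linalg.svd(K)
k = 10  # regularization strength -- an essentially arbitrary choice
s_inv = np.concatenate([1.0 / s[:k], np.zeros(n - k)])
unfolded = Vt.T @ (s_inv * (U.T @ observed))
```

The result is smooth, but everything now hinges on the cutoff k: a different choice gives a different answer, and it is not obvious what error bars one should quote on a result that depends on such a knob.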

Mikael’s regularization scheme is “physics-driven”: it exploits the fact that physicists often know some basic properties of the quantities they measure. For instance, if one measures the energy of jets in proton-proton collisions at the LHC, one knows that the probability of observing a jet is a decreasing function of its energy. Furthermore, one expects the distribution to have other properties – positivity and convexity are among them. The method allows one to incorporate these constraints into the result of the unfolding procedure, in a way that ensures a correct estimate of the uncertainties of the result, that is, the so-called “coverage properties” of the resulting error bars.
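To illustrate the idea (and only the idea: the real novelty of the paper is the uncertainty quantification, which this sketch does not attempt), here is a toy shape-constrained least-squares fit with SciPy, imposing positivity, monotone decrease and convexity on the unfolded spectrum. The setup and all parameters are again my own hypothetical choices, not code from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Same toy smearing problem, on a coarser grid to keep the fit small.
rng = np.random.default_rng(0)
n = 30
x = np.linspace(0.0, 1.0, n)
true = 1000.0 * np.exp(-4.0 * x)
sigma = 2.0 * (x[1] - x[0])
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
K /= K.sum(axis=0)
observed = rng.poisson(K @ true).astype(float)

def loss(f):
    """Least-squares distance between the folded candidate and the data."""
    r = K @ f - observed
    return r @ r

# Shape constraints encoding the physics knowledge: each function must be
# non-negative at the solution for the constraint to hold.
constraints = [
    {"type": "ineq", "fun": lambda f: f},                             # positivity
    {"type": "ineq", "fun": lambda f: f[:-1] - f[1:]},                # decreasing
    {"type": "ineq", "fun": lambda f: f[2:] - 2 * f[1:-1] + f[:-2]},  # convexity
]

res = minimize(loss, observed, method="SLSQP", constraints=constraints,
               options={"maxiter": 500})
unfolded = res.x  # positive, decreasing and convex up to the solver tolerance
```

Note that no smoothing knob like the SVD cutoff appears here: the constraints themselves do the regularizing, and since they are genuine properties of the physical spectrum rather than arbitrary choices, one can hope for honest error bars. Making that hope rigorous is what the paper is about.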

As I realize I am getting too deep into the matter, I think it is better to close this post here, in the hope that Mikael will be able to explain the method in detail in a future post, better than I possibly could. Of course, if you are curious you should definitely have a look at Mikael’s paper here.

(Written by T. Dorigo)