Hello there! My name is Seng Pei Liew, and I am a “Marie Skłodowska-Curie Actions” Innovative Training Network (AMVA4NewPhysics) early-stage researcher based at the Technische Universität München (TUM). Within the network, I am tasked with looking for new physics at the Large Hadron Collider (LHC) using advanced statistical tools. In my very first article here, I would like to talk Continue reading “Dark Matter Hunting at the LHC”
After a lot of agonizing work on tiny systematic uncertainties, the ATLAS collaboration released, in time for the Moriond conference, its latest measurement of the W boson mass (in fact its only one so far). The result closely matches previous determinations, and has a slightly larger error bar than they do. So why bother discussing it here?
There is a reason. The W boson is one of the most important subatomic particles Continue reading “W Mass: Closing In”
Welcome back to the second part of my introduction into how neural-networks function! If you missed the first part, you can read it here.
When we left off, we’d understood that a neural network aims to form a predictive model by building a mathematical map from features in the data to a desired output. This map takes the form of layers of neurons, each applying a basic function, and is built by altering the weights each neuron applies to its inputs. By aiming to minimise the loss function, which characterises the performance of the network, the optimal values of these weights may be learnt. We found that this can be a difficult task due to the large number of free parameters, but luckily the loss function contains many near-equally good minima. We simply need to reach one of them, which we can do by employing the gradient descent algorithm. Continue reading “Understanding Neural-Networks: Part II – Back-propagation”
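The gradient-descent idea mentioned in the excerpt above can be illustrated in a few lines. This is a minimal sketch on a toy one-parameter quadratic loss, not the network loss from the series; all names here are my own, for illustration only:

```python
# Gradient descent on an illustrative quadratic loss with its minimum at w = 3.
# All function names and values are hypothetical, chosen only to show the idea.

def loss(w):
    # Toy loss: smallest when w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient of the toy loss.
    return 2.0 * (w - 3.0)

def gradient_descent(w0, lr=0.1, steps=100):
    # Repeatedly step against the gradient to reduce the loss.
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

w_opt = gradient_descent(w0=0.0)
print(round(w_opt, 4))  # converges toward 3.0
```

In a real network the single parameter `w` becomes a large vector of weights, and the gradient is computed by back-propagation rather than by hand, but each update step has exactly this form.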
Below is a short summary of the IML workshop at CERN, which Markus Stoye has also reported on in the previous post.
Day 1 featured a discussion with industry experts about the state and future of ML. In the afternoon there was work on the community white-paper that the IML plans to publish. This document is meant to be a road-map for where we want HEP to be, with regard to ML, in 10 years’ time. The proto-document is Continue reading “Some More Info on the IML Workshop”
by Markus Stoye
This week the first Inter-experimental LHC Machine Learning (IML) workshop took place at CERN. I showed my results on using deep learning for hadronic particle labeling (flavour tagging), a method that offers significant improvements in the labeling of heavy-flavour jets for the CMS experiment (of which I am a member). Although deep learning is all over the media, the big CERN experiments have not used it much so far. In fact my application is, to my knowledge, the very first deep-learning application in CMS reconstruction.
The workshop featured several presentations on deep learning using Continue reading “Big LHC Experiments Go Deep”
As the few regulars of this blog know, the AMVA4NewPhysics network has in its genes a strong will to fight for gender neutrality in its areas of operation – research in Particle Physics and Applied Statistics. We started off this endeavour 2.5 years ago by including three women as PIs of beneficiary nodes, out of a total of eight, which was *almost* good. Their research record was outstanding, too, which helped us get funded!
So that was easy. What was less easy was to deliver what we promised in our programme – a hiring practice capable of producing a gender-balanced pool Continue reading “Fighting Gender Bias”
Last week, as part of one of my PhD courses, I gave a one-hour seminar covering one of the machine learning tools which I have used extensively in my research: neural networks. Preparing the seminar was very useful for me, since it required me to make sure that I really understood how the networks function, and I (think I) finally got my head around back-propagation – more on that later. In this post and, depending on length, the next (few), I intend to distil my seminar into something which might be of use to you, dear reader. Here goes!
A neural network is a method in the field of machine learning. This field aims to build predictive models to help solve complex tasks by exposing a flexible system to a large amount of data. The system is then allowed to learn by itself how to best form its predictions. Continue reading “Understanding Neural-Networks: Part I”
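The “flexible system” described in the excerpt above is, at its smallest scale, a single neuron: a weighted sum of input features plus a bias, passed through an activation function. A minimal sketch, with purely illustrative names and values (not taken from the original seminar):

```python
import math

# A single hypothetical neuron: weighted sum of the inputs plus a bias,
# squashed through a sigmoid activation into the range (0, 1).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(features, weights, bias):
    # Weighted sum of features, shifted by the bias, then activated.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(z)

# Example: z = 0.5*1.0 + (-0.25)*2.0 + 0.0 = 0, so the output is sigmoid(0) = 0.5.
p = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.0)
print(p)  # 0.5
```

A full network simply stacks layers of such neurons, and “learning” means adjusting the weights and biases so the outputs match the desired predictions.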
While I was busy reporting on the talks at the “Neutrino Telescope” conference in Venice, LHCb released a startling new result, which I do not have time to describe in much detail this evening (it’s Friday evening here in Italy and I’m going to call the week off), and yet wish to share with you as soon as possible.
The spectroscopy of low- and intermediate-mass hadrons (whatever this means) is a complex topic which either enthuses particle Continue reading “Five New Resonances Discovered by LHCb – Huh, So What ?”
A few years ago, back when I was a Summer Student at CERN, some other physics students and I had a debate about the origins of the word “hysteresis.” We were just coming back from CinéTransat – an open-air cinema at La Perle du Lac park in Geneva – and having a random chat, when Sabina jokingly accused Josefa of being hysteric, for a reason I can’t remember right now. From there, they started a discussion on how the word “hysteric” is related to “uterus” (at some point in the past, hysteria was defined as a psychological disorder related to that organ), and to “hysterectomy” (the removal Continue reading “On the origins of “hysteresis””