AMVA4NewPhysics

A Marie Skłodowska-Curie ITN funded by the Horizon 2020 program of the European Commission

Category: Classification

Tau Identification At CMS With Neural Networks

by Giles Strong

Both the CMS and ATLAS collaborations are vast, with around 5000 qualified scientists between them, and even more members working towards qualification. Everyone who is ‘qualified’ is listed as an author on any publication the collaboration produces, regardless of who actually did the major work for the analysis. Continue reading “Tau Identification At CMS With Neural Networks”

Convolutional Neural Networks and neutrinos

by Cecilia Tosciri

Have you ever wondered how Facebook suggests the tags for the pictures you post on your wall, or how the photo library on your computer manages to automatically create albums containing pictures of particular people? Well, they use facial-recognition software based on Convolutional Neural Networks (CNNs).

CNNs are the most popular and effective method for object recognition: a CNN is a specialized kind of neural network for processing data that has a known grid-like topology. The network employs a mathematical operation Continue reading “Convolutional Neural Networks and neutrinos”
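That operation is the convolution: a small kernel is slid across the input grid, producing a weighted sum at every position. A minimal NumPy sketch of the idea (purely illustrative, not code from the post):

    import numpy as np

    def conv2d(image, kernel):
        # 'Valid' 2D cross-correlation: slide the kernel over the image
        # and take a weighted sum of the overlapped pixels at each spot.
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out

    image = np.random.rand(5, 5)            # stand-in for a tiny image
    kernel = np.array([[-1., -1., -1.],     # a classic edge-detection
                       [-1.,  8., -1.],     # kernel; in a CNN these
                       [-1., -1., -1.]])    # weights are learnt
    print(conv2d(image, kernel))            # 3x3 feature map

In a real CNN the kernel weights are learnt during training, and many kernels are applied in parallel, each producing its own feature map.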

Understanding Neural-Networks: Part IV – Improvements & Advantages

by Giles Strong

Welcome to the final instalment of my series on neural networks. If you’re just joining us, previous parts are here, here, and here.

Last time we looked at how we could fix some of the problems that were responsible for limiting the size of the networks we could train. Here we will be covering some additions we can make to the models in order to further increase their power. Having learnt how to build powerful networks, we will also look into why exactly neural-networks can be so much more powerful than other methods.
Continue reading “Understanding Neural-Networks: Part IV – Improvements & Advantages”

Understanding Neural-Networks: Part III – Diagnosis and treatment

by Giles Strong

Welcome to the third part of my introduction to understanding neural networks; previous parts here and here in case you missed them.

So it’s 1986: we’ve got a mathematically sensible way of optimising our networks, but they’re still not as performant as other methods… Well, we know that adding more layers will increase their power, so let’s just keep making them larger. Oh no! Now the network no longer trains! It just sits there, refusing to optimise. Continue reading “Understanding Neural-Networks: Part III – Diagnosis and treatment”
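The excerpt cuts off at the cliff-hanger, but the textbook diagnosis for deep networks of that era is the vanishing-gradient problem: with sigmoid activations, the back-propagated gradient shrinks multiplicatively at every layer. A toy numerical sketch of the effect (my illustration, assuming this is the failure mode the post goes on to treat):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # The sigmoid's derivative never exceeds 0.25, so (ignoring the
    # weights) each layer multiplies the back-propagated gradient by
    # a factor of at most 0.25.
    grad = 1.0
    for layer in range(20):
        s = sigmoid(0.5)         # a typical pre-activation value
        grad *= s * (1.0 - s)    # chain-rule factor for one layer
    print(grad)                  # ~2.6e-13: the early layers barely move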

Understanding Neural-Networks: Part II – Back-propagation

by Giles Strong

Welcome back to the second part of my introduction to how neural-networks function! If you missed the first part, you can read it here.

When we left off, we’d understood that a neural network aims to form a predictive model by building a mathematical map from features in the data to a desired output. This map takes the form of layers of neurons, each applying a basic function. The map is built by altering the weights each neuron applies to its inputs. By aiming to minimise the loss function, which characterises the performance of the network, the optimal values of these weights may be learnt. We found that this can be a difficult task due to the large number of free parameters, but luckily the loss surface is populated by many equally optimal minima. We simply need to reach one of them, and can therefore employ the gradient-descent algorithm. Continue reading “Understanding Neural-Networks: Part II – Back-propagation”
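In code, one step of gradient descent nudges each weight against the gradient of the loss with respect to that weight. A minimal single-weight sketch, assuming a made-up quadratic loss purely for illustration:

    # Toy loss L(w) = (w - 3)^2, minimised at w = 3
    def grad_loss(w):
        return 2.0 * (w - 3.0)    # dL/dw

    w = 0.0                       # arbitrary starting weight
    lr = 0.1                      # learning rate (step size)
    for step in range(50):
        w -= lr * grad_loss(w)    # w <- w - lr * dL/dw
    print(w)                      # ~3.0: we have descended to the minimum

Back-propagation, the subject of the post, is what supplies these gradients efficiently for every weight in the network at once.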

Some More Info on the IML Workshop

by Giles Strong

Below is a short summary of the IML workshop at CERN, which Markus Stoye has also reported on in the previous post.

Day 1 was a discussion with industry experts about the state and future of ML. In the afternoon there was work on the community white-paper that the IML plans to publish. This document is meant to be a road-map for where we want HEP to be in 10 years’ time with regard to ML. The proto-document is Continue reading “Some More Info on the IML Workshop”

Big LHC Experiments Go Deep

by Markus Stoye

This week the first Inter-experimental LHC Machine Learning (IML) workshop took place at CERN. I showed my results on using deep learning for hadronic particle labeling (flavour tagging), a method that offers significant improvements in the labeling of heavy-flavour jets for the CMS experiment (of which I am a member). Although deep learning is all over the media, the big CERN experiments have not used it much so far. In fact my application is, to my knowledge, the very first deep-learning application in CMS reconstruction.

The workshop featured several presentations on deep learning using Continue reading “Big LHC Experiments Go Deep”

Do Not Name Him Donald!

by Grzegorz Kotkowski

Recently I encountered an interesting article about trends in female names in the US. It shows the impact of famous Disney movies on the names given to newborns. As “Frozen” became very popular, many girls born in 2014 were given names such as Elsa or Merida.

I want to use the same dataset to perform an analogous analysis, but for the names of US presidents. My guess is that it should well represent if a Continue reading “Do Not Name Him Donald!”
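For anyone who wants to follow along, the dataset is presumably the US Social Security baby-names release, which ships one yobYYYY.txt file per year with name, sex and count columns. A rough pandas sketch of the counting step (an assumption about the approach, with an illustrative list of names, not the post’s actual code):

    import pandas as pd

    presidents = ["Ronald", "George", "Bill", "Barack", "Donald"]

    # One file per year: rows look like "Mary,F,7065"
    frames = []
    for year in range(1980, 2015):
        df = pd.read_csv(f"names/yob{year}.txt",
                         names=["name", "sex", "count"])
        df["year"] = year
        frames.append(df)
    names = pd.concat(frames)

    # Yearly totals of newborns given each presidential first name
    trend = (names[names["name"].isin(presidents)]
             .groupby(["year", "name"])["count"].sum()
             .unstack(fill_value=0))
    print(trend.tail())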

AMVA4NewPhysics Deliverable 1.1: MVA for Higgs Boson Searches at the LHC

by the AMVA4NP press office

It is with a certain satisfaction that I can announce today that the AMVA4NewPhysics network is in complete control of its planned schedule, and has now started to provide real research-grade output, delivering its first two scientific products of relevance. Deliverable 1.1 (from work package 1, which focuses on MVA applications to Higgs boson studies) and Deliverable 4.1 (from work package 4, which focuses on the development of entirely new Machine Learning tools, with their application to specific HEP Continue reading “AMVA4NewPhysics Deliverable 1.1: MVA for Higgs Boson Searches at the LHC”
