AMVA4NewPhysics

A Marie Sklodowska-Curie ITN funded by the Horizon2020 program of the European Commission

Classification with autoencoders: idle thought to working prototype in 2 hours

by Giles Strong

Continuing the series of 101 things to do in the cramped confines of a budget airliner:

Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts over a pre-flight Duvel, however, had got me thinking about autoencoders. Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”

Summer activities at LIP-Lisbon

by Giles Strong

So, it’s been a while since my last post, apologies for that, but the summer has been both busy and eventful, so let me summarise what’s been happening. Continue reading “Summer activities at LIP-Lisbon”

First CMS Physics Object School in Bari

by Ioanna Papavergou

One of the best parts of being a physics PhD student is having the chance to broaden your knowledge by attending seminars and schools designed to help you become more efficient in your research. I was fortunate to have such an opportunity by attending the first CMS Physics Object School (POS), which took place from September 4th to 8th in Bari, Italy. Continue reading “First CMS Physics Object School in Bari”

My impressions on the RooStats Tutorial

by Greg Kotkowski

On the 19th of May I was very glad to take part in the RooStats tutorial organised by the AMVA4NewPhysics Network as part of a workshop in Oviedo. RooStats is a ROOT library that uses the “RooFit” package and provides classes for performing statistical analysis. The tutorial was attended by all the ESRs from our Network, among whom I was the only non-physicist. I am a statistician who does not use ROOT at all. For this reason, my attendance at the tutorial could seem Continue reading “My impressions on the RooStats Tutorial”

Convolutional Neural Networks and neutrinos

by Cecilia Tosciri

Have you ever wondered how Facebook suggests the tags for the pictures you post on your wall, or how the photo library on your computer manages to automatically create albums containing pictures of particular people? Well, they use facial recognition software based on Convolutional Neural Networks (CNNs).

The CNN is the most popular and effective method for object recognition: a specialized kind of neural network for processing data that has a known grid-like topology. The network employs a mathematical operation Continue reading “Convolutional Neural Networks and neutrinos”
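The grid-like operation a CNN applies can be illustrated with a minimal sketch (not taken from the post itself): sliding a small kernel over an image and summing the elementwise products at each position. Names and values here are illustrative only.

```python
def convolve2d(image, kernel):
    """Valid 2D convolution with stride 1 (strictly a cross-correlation,
    as implemented in most deep-learning libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            # Elementwise product of the kernel with the patch under it
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += image[i + a][j + b] * kernel[a][b]
            row.append(s)
        out.append(row)
    return out

# A tiny vertical-edge-detecting kernel applied to a toy "image":
# the response is non-zero only where the pixel values change.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
result = convolve2d(image, kernel)  # → [[0.0, -2.0, 0.0], [0.0, -2.0, 0.0]]
```

In a real CNN the kernel weights are not hand-crafted like this edge detector but learnt from data, and many kernels are applied in parallel at each layer.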

Understanding Neural-Networks: Part IV – Improvements & Advantages

by Giles Strong

Welcome to the final instalment of my series on neural networks. If you’re just joining us, previous parts are here, here, and here.

Last time we looked at how we could fix some of the problems that were responsible for limiting the size of the networks we could train. Here we will be covering some additions we can make to the models in order to further increase their power. Having learnt how to build powerful networks, we will also look into why exactly neural networks can be so much more powerful than other methods.
Continue reading “Understanding Neural-Networks: Part IV – Improvements & Advantages”

Understanding Neural-Networks: Part III – Diagnosis and treatment

by Giles Strong

Welcome to the third part of my introduction to understanding neural networks; previous parts here and here in case you missed them.

So it’s 1986, we’ve got a mathematically sensible way of optimising our networks, but they’re still not as performant as other methods… Well, we know that adding more layers will increase their power, let’s just keep making them larger. Oh no! Now the network no longer trains! It just sits there refusing to optimise. Continue reading “Understanding Neural-Networks: Part III – Diagnosis and treatment”

Understanding Neural-Networks: Part II – Back-propagation

by Giles Strong

Welcome back to the second part of my introduction to how neural networks function! If you missed the first part, you can read it here.

When we left off, we’d understood that a neural network aims to form a predictive model by building a mathematical map from features in the data to a desired output. This map takes the form of layers of neurons, each applying a basic function. The map is built by altering the weights each neuron applies to its inputs. The optimal values of these weights may be learnt by minimising the loss function, which characterises the performance of the network. We found that this can be a difficult task due to the large number of free parameters, but luckily the loss function is populated by many equally optimal minima. We simply need to reach one, and can therefore employ the gradient descent algorithm. Continue reading “Understanding Neural-Networks: Part II – Back-propagation”
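The gradient-descent idea summarised above can be sketched with a toy one-parameter example (mine, not the post’s): repeatedly step the weight against the gradient of the loss until a minimum is reached.

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Minimise a loss by stepping against its gradient.

    grad  -- function returning dL/dw at a given weight
    w0    -- starting value of the weight
    lr    -- learning rate (step size)
    """
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)  # move downhill on the loss surface
    return w

# Toy quadratic loss L(w) = (w - 3)^2, so dL/dw = 2 * (w - 3);
# the weight converges towards the minimum at w = 3.
w_opt = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
```

A real network does the same thing simultaneously for millions of weights, with back-propagation supplying the gradient of the loss with respect to each one.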

Some More Info on the IML Workshop

by Giles Strong

Below is a short summary of the IML workshop at CERN, which Markus Stoye has also reported on in the previous post.

Day 1 was a discussion with industry experts about the state and future of ML. In the afternoon there was work on the community white-paper that the IML plans to publish. This document is meant to be a road-map for where we want HEP to be in 10 years’ time with regard to ML. The proto-document is Continue reading “Some More Info on the IML Workshop”
