
AMVA4NewPhysics

A Marie Sklodowska-Curie ITN funded by the Horizon2020 program of the European Commission

Category: neural networks

Hyper-parameters revisited

by Giles Strong

Introduction

Well folks, it’s been quite a while since my last post; apologies for that. It’s been a busy few months.

Towards the end of last year I wrote a post on optimising the hyper-parameters (depth, width, learning rate, et cetera) of neural networks. In that post I described how I was trying to use Bayesian methods to ‘quickly’ find useful sets of parameters. Continue reading “Hyper-parameters revisited”

Train-time/test-time data augmentation

by Giles Strong

The week before last I was presenting an update of some of my analysis work to the rest of my group. The work involves developing a neural network to classify particle collisions at the LHC. Continue reading “Train-time/test-time data augmentation”
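To give a flavour of the test-time side of the idea: LHC collisions are symmetric under rotations about the beam axis, so a classifier’s predictions can be averaged over several randomly rotated copies of each event. This is a minimal sketch of the technique, not the analysis code; the `toy_model` below is an invented stand-in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_phi(event, dphi):
    """Rotate an event's (px, py) components by dphi about the beam axis."""
    px, py = event[..., 0], event[..., 1]
    out = event.copy()
    out[..., 0] = np.cos(dphi) * px - np.sin(dphi) * py
    out[..., 1] = np.sin(dphi) * px + np.cos(dphi) * py
    return out

def predict_tta(model, event, n_aug=8):
    """Test-time augmentation: average predictions over n_aug random rotations."""
    angles = rng.uniform(0, 2 * np.pi, size=n_aug)
    return np.mean([model(rotate_phi(event, a)) for a in angles], axis=0)

# Toy rotation-invariant "model": cuts on summed transverse momentum.
toy_model = lambda ev: float(np.hypot(ev[..., 0], ev[..., 1]).sum() > 1.0)

event = np.array([[0.8, 0.9, 0.1]])  # one particle: (px, py, pz)
print(predict_tta(toy_model, event))
```

For a genuinely rotation-invariant model the augmented predictions agree exactly; for a real network they differ slightly, and the average is typically more stable than any single prediction.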

Higgs Hacking

by Giles Strong

A few days before I returned from CERN at the beginning of the month, I attended a talk on the upcoming TrackML challenge. This is a competition beginning this month in which members of the public will be invited to try to find a solution to the quite tricky problem of accurately reconstructing particle trajectories in collisions at the LHC. The various detectors simply record the hits where particles pass by; to make use of this data, however, the hits in surrounding detector layers must be combined into a single flight path, called a track. Continue reading “Higgs Hacking”
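To give a flavour of what “combining hits into tracks” means, here is a deliberately naive sketch: seed one candidate track per innermost-layer hit, then greedily extend each with the nearest unused hit on every successive layer. The real problem (and TrackML) is far harder, with curved trajectories, noise hits, and thousands of particles per event; the data and threshold here are invented for illustration.

```python
import numpy as np

def link_tracks(layers, max_step=1.0):
    """Greedy hit-linking: seed on the innermost layer, then extend each
    candidate with the nearest unused hit on the next layer (within max_step)."""
    tracks = [[hit] for hit in layers[0]]
    for layer in layers[1:]:
        used = set()
        for track in tracks:
            last = track[-1]
            best, best_d = None, max_step
            for i, hit in enumerate(layer):
                d = np.linalg.norm(np.subtract(hit, last))
                if i not in used and d < best_d:
                    best, best_d = i, d
            if best is not None:
                track.append(layer[best])
                used.add(best)
    return tracks

# Two straight toy tracks crossing three detector layers, hits as (x, y):
layers = [
    [(0.0, 0.0), (5.0, 0.0)],
    [(0.1, 0.5), (5.1, 0.5)],
    [(0.2, 1.0), (5.2, 1.0)],
]
for track in link_tracks(layers):
    print(track)
```

Greedy nearest-neighbour linking falls apart as soon as tracks cross or hits are missing, which is precisely why the challenge invites smarter approaches.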

Adjusting hyper-parameters: First step into Bayesian optimisation of DNNs

by Giles Strong

A few months ago I wrote about some work I was doing on improving the way a certain kind of particle is detected at CMS, by replacing the existing algorithm with a neural network. I recently resumed this work and have now got to the point where I can show a significant improvement over the existing method. The design of the neural network, however, was imported from some other work, and what I want to do now is adjust it to better suit my problem. Continue reading “Adjusting hyper-parameters: First step into Bayesian optimisation of DNNs”
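The core loop of Bayesian optimisation can be sketched in a few lines: fit a Gaussian process to the (hyper-parameter, validation-loss) pairs seen so far, then evaluate next wherever the expected improvement is highest. This is an illustrative sketch, not the code from the post; the quadratic `objective` is an invented stand-in for a real train-and-validate run, and the search covers only log10 of the learning rate.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def objective(log_lr):
    """Stand-in for validation loss as a function of log10(learning rate)."""
    return (log_lr + 3.0) ** 2 + 0.1 * rng.normal()

# A few initial random evaluations over log10(lr) in [-6, -1].
X = rng.uniform(-6, -1, size=(4, 1))
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
grid = np.linspace(-6, -1, 200).reshape(-1, 1)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    # Expected improvement over the best loss seen so far (minimisation).
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best log10(lr) found:", X[np.argmin(y), 0])
```

The appeal over grid search is that each expensive training run is spent where the surrogate model is either promising or uncertain, rather than on a fixed lattice of settings.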

Classification with autoencoders: idle thought to working prototype in 2 hours

by Giles Strong

Continuing the series of 101 things to do in the cramped confines of a budget airliner:

Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts, however, during a pre-flight Duvel had got me thinking about autoencoders. Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”
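The idea can be sketched cheaply with a linear stand-in: train one autoencoder per class, then assign a new example to the class whose autoencoder reconstructs it with the smallest error. Below, PCA plays the role of a linear autoencoder (`transform`/`inverse_transform` acting as encoder/decoder); the data and class labels are invented for illustration, not taken from the post.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Two toy classes living near different low-dimensional subspaces of R^10.
A = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
B = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10)) + 3.0

# One "autoencoder" per class (here linear, i.e. PCA with 2 components).
encoders = {c: PCA(n_components=2).fit(X) for c, X in {"A": A, "B": B}.items()}

def classify(x):
    """Assign x to the class whose autoencoder reconstructs it best."""
    errs = {c: np.linalg.norm(x - p.inverse_transform(p.transform([x]))[0])
            for c, p in encoders.items()}
    return min(errs, key=errs.get)

print(classify(A[0]), classify(B[0]))
```

A nonlinear autoencoder follows the same recipe, just with a learned encoder/decoder instead of a fixed linear projection.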

Summer activities at LIP-Lisbon

by Giles Strong

So, it’s been a while since my last post; apologies for that. The summer has been both busy and eventful, so let me summarise what’s been happening. Continue reading “Summer activities at LIP-Lisbon”

Tau Identification At CMS With Neural Networks

by Giles Strong

Both the CMS and ATLAS collaborations are pretty vast, with around 5000 qualified scientists between them, and even more members working towards qualification. Everyone who is ‘qualified’ is listed as an author on any publication the collaboration produces, regardless of who actually did the major work for the analysis. Continue reading “Tau Identification At CMS With Neural Networks”

Convolutional Neural Networks and neutrinos

by Cecilia Tosciri

Have you ever wondered how Facebook suggests the tags for the picture you post on your wall, or how the photo library on your computer manages to automatically create albums containing pictures of particular people? Well, they use facial recognition software based on convolutional neural networks (CNNs).

CNNs are the most popular and effective method for object recognition; they are a specialised kind of neural network for processing data that has a known grid-like topology. The network employs a mathematical operation Continue reading “Convolutional Neural Networks and neutrinos”
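That mathematical operation is the convolution (in practice, cross-correlation): slide a small kernel over the image grid and take a weighted sum at each position. A minimal sketch in plain NumPy, with an invented edge-detecting kernel:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position (the core CNN operation)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with one vertical edge.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1.0, 1.0]])  # responds where intensity jumps left-to-right
print(conv2d(image, kernel))
```

In a real CNN the kernel weights are not hand-picked like this; they are learned from data, with many kernels per layer.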

Understanding Neural-Networks: Part IV – Improvements & Advantages

by Giles Strong

Welcome to the final instalment of my series on neural networks. If you’re just joining us, previous parts are here, here, and here.

Last time we looked at how we could fix some of the problems that were responsible for limiting the size of the networks we could train. Here we will cover some additions we can make to the models in order to further increase their power. Having learnt how to build powerful networks, we will also look into why exactly neural networks can be so much more powerful than other methods.
Continue reading “Understanding Neural-Networks: Part IV – Improvements & Advantages”
