
AMVA4NewPhysics

A Marie Sklodowska-Curie ITN funded by the Horizon2020 program of the European Commission

Category: neural networks

Journey through Fast.AI: II – Columnar data

by Giles Strong

Welcome back to the second part of my journey through the Fast.AI deep-learning course; the series begins here. Last time I gave an example of analysing images; now I’ll move on to working with columnar data.

Columnar data is a form of structured data, meaning that the features of the data have already been extracted (in this case into columns), unlike in images or audio, where features must be learnt or carefully constructed by hand.

Continue reading “Journey through Fast.AI: II – Columnar data”
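As a minimal illustration of what “columnar” means here, consider some made-up rows in the style of a store-sales dataset (all numbers hypothetical): every feature is already a named column, so no feature learning is needed before a model can consume the rows.

```python
# Made-up rows: each feature is already an extracted, named column.
rows = [
    {"store": "A", "day_of_week": 5, "promo": 1, "sales": 5263},
    {"store": "B", "day_of_week": 5, "promo": 0, "sales": 6064},
    {"store": "A", "day_of_week": 6, "promo": 1, "sales": 8314},
]

# Split each row into its feature columns and a target column, as a
# tabular model expects.
feature_cols = ("store", "day_of_week", "promo")
features = [{k: r[k] for k in feature_cols} for r in rows]
target = [r["sales"] for r in rows]
```

Contrast this with an image, where the model must learn which combinations of raw pixels constitute a useful feature.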

Journey through Fast.AI: I – Introduction and image data

by Giles Strong

For the past few months I’ve been following the Fast.AI Deep-Learning for Coders course, an online series of lectures accompanied by Jupyter notebooks and a Python library built around PyTorch. The course itself is split into two halves. The first uses a top-down approach to teach state-of-the-art techniques and best practices for deep learning, aiming for top results on well-established problems and datasets, with later lessons delving deeper into the code and mathematics. The second half deals more with the cutting edge of deep learning, focusing on less-well-founded problems, such as generative modelling, and on recent experimental techniques which are still being developed.

Continue reading “Journey through Fast.AI: I – Introduction and image data”

Advanced Results in Lisbon

by Tommaso Dorigo

This week the VII AMVA4NewPhysics workshop is under way at the premises of LIP in Lisbon. During these events the network gets together to discuss the status of the various projects, plan future events and activities, address issues that have arisen, and vote on the budget and other topics. But this is a special event in the lifetime of the network, as we are getting toward the mature stage – we are in the …

Continue reading “Advanced Results in Lisbon”

Can Neural Networks Design The Detector Of A Future Particle Collider?

by Tommaso Dorigo

Casual reader, be warned – the contents of this article, specifically the second part of it, are highly volatile, speculative stuff. But hey, that is the stuff that dreams are made of. And I have one or two good reasons to dream on.


The environment

Machine learning is ubiquitous today. Self-driving cars; self-shaving robots (just kidding, but I’m sure they could be constructed if the need arose); programs that teach themselves chess and become world-champion-class players overnight; Siri; Google’s search engine; Google Translate – okay, I am going too far. But you know it: machine learning has become a player in almost …

Continue reading “Can Neural Networks Design The Detector Of A Future Particle Collider?”

Hyper-parameters revisited

by Giles Strong

Introduction

Well folks, it’s been quite a while since my last post; apologies for that, it’s been a busy few months.

Towards the end of last year I wrote a post on optimising the hyper-parameters (depth, width, learning rate, et cetera) of neural networks. In that post I described how I was trying to use Bayesian methods to ‘quickly’ find useful sets of parameters.

Continue reading “Hyper-parameters revisited”
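The search problem itself can be sketched generically: define a space over depth, width and learning rate, and an objective (validation loss) to minimise. The post describes Bayesian methods; as a self-contained baseline, here is a plain random search over that space, with an entirely hypothetical toy objective standing in for a real training run:

```python
import math
import random

random.seed(2)

def toy_objective(depth, width, lr):
    # Stand-in for a validation loss; entirely hypothetical, with its
    # optimum placed at depth=3, width=64, lr=1e-3.
    return (depth - 3) ** 2 + (math.log2(width) - 6) ** 2 + (math.log10(lr) + 3) ** 2

best_loss, best_cfg = None, None
for _ in range(50):
    cfg = {
        "depth": random.randint(1, 8),
        "width": random.choice([16, 32, 64, 128, 256]),
        "lr": 10 ** random.uniform(-5, -1),  # sample the learning rate log-uniformly
    }
    loss = toy_objective(**cfg)
    if best_loss is None or loss < best_loss:
        best_loss, best_cfg = loss, cfg
```

A Bayesian optimiser replaces the blind random draws with a surrogate model (typically a Gaussian process) fitted to the configurations tried so far, which proposes the next point to evaluate; that is what makes it ‘quick’ when each evaluation means training a network.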

Train-time/test-time data augmentation

by Giles Strong

The week before last I was presenting an update on some of my analysis work to the rest of my group. The work involved developing a neural network to classify particle collisions at the LHC.

Continue reading “Train-time/test-time data augmentation”
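The idea in the title can be sketched generically: at train time, augmentation feeds the model randomly perturbed copies of each example; at test time, one can instead average the model’s predictions over several augmented copies of the input. A minimal pure-Python sketch, with a stand-in model and a stand-in augmentation (both hypothetical):

```python
import random

random.seed(1)

def predict(x):
    # Stand-in for a trained classifier: a hard score for a 1-d input.
    return 1.0 if x > 0.5 else 0.0

def augment(x):
    # Stand-in augmentation: a small random jitter of the input.
    return x + random.gauss(0, 0.1)

def tta_predict(x, n_aug=8):
    # Average the prediction over the original plus n_aug augmented copies.
    scores = [predict(x)] + [predict(augment(x)) for _ in range(n_aug)]
    return sum(scores) / len(scores)

score = tta_predict(0.9)
```

For images the augmentations would be flips, crops and rotations rather than jitter, but the averaging step is the same.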

Higgs Hacking

by Giles Strong

A few days before I returned from CERN at the beginning of the month, I attended a talk on the upcoming TrackML challenge. This is a competition, beginning this month, in which members of the public will be invited to try to solve the quite tricky problem of accurately reconstructing particle trajectories in collisions at the LHC. The various detectors simply record the hits where particles pass by; however, to make use of these data, the hits in surrounding detector layers must be combined into a single flight path, called a track.

Continue reading “Higgs Hacking”
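The hit-to-track combination can be caricatured in a few lines: given hits recorded per detector layer, greedily link each point to the nearest hit in the next layer. Real tracking is far harder (curved trajectories in a magnetic field, thousands of hits, noise and shared hits); this is only a toy sketch with hypothetical geometry:

```python
# Hypothetical (x, y) hit positions, one list per detector layer.
layers = [
    [(0.0, 0.1), (0.0, 0.9)],
    [(1.0, 0.2), (1.0, 1.1)],
    [(2.0, 0.3), (2.0, 1.3)],
]

def build_track(seed_hit, layers):
    # Grow a candidate track by taking the nearest hit in each next layer.
    track = [seed_hit]
    for layer in layers[1:]:
        prev = track[-1]
        track.append(min(layer, key=lambda h: (h[0] - prev[0]) ** 2 + (h[1] - prev[1]) ** 2))
    return track

track = build_track(layers[0][0], layers)
```

The challenge is precisely that greedy nearest-hit linking breaks down at LHC hit densities, which is why machine-learning approaches are being solicited.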

Adjusting hyper-parameters: First step into Bayesian optimisation of DNNs

by Giles Strong

A few months ago I wrote about some work I was doing on improving the way a certain kind of particle is detected at CMS, by replacing the existing algorithm with a neural network. I recently resumed this work and have now got to the point where I can show a significant improvement over the existing method. The design of the neural network, however, was one I imported from other work, and what I want to do now is adjust it to better suit my problem.

Continue reading “Adjusting hyper-parameters: First step into Bayesian optimisation of DNNs”

Classification with autoencoders: idle thought to working prototype in 2 hours

by Giles Strong

Continuing the series of 101 things to do in the cramped confines of a budget airliner:

Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts during a pre-flight Duvel, however, had got me thinking about autoencoders.

Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”
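The general idea of classifying with an autoencoder can be sketched as follows: train the autoencoder to reconstruct one class only, then classify new examples by their reconstruction error, which is larger for the class the model never saw. A minimal pure-Python sketch, using a one-unit linear autoencoder and hypothetical 2-d data (not the prototype from the post):

```python
import random

random.seed(0)

# Hypothetical 2-d data: "signal" lies near the line y = x; "background"
# fills the square. We train on signal only.
ts = [random.uniform(-1, 1) for _ in range(200)]
signal = [(t, t + random.gauss(0, 0.05)) for t in ts]
background = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

def sq_error(x, w):
    # Encode to a 1-d latent code z = w.x, decode as x' = w*z (tied
    # weights), and return the squared reconstruction error.
    z = w[0] * x[0] + w[1] * x[1]
    return (x[0] - w[0] * z) ** 2 + (x[1] - w[1] * z) ** 2

# Train the weight vector by stochastic gradient descent on the
# reconstruction error of the signal sample only.
w, lr = [0.6, 0.8], 0.05
for _ in range(100):
    for x in signal:
        z = w[0] * x[0] + w[1] * x[1]
        r0, r1 = w[0] * z, w[1] * z
        common = (x[0] - r0) * w[0] + (x[1] - r1) * w[1]
        w[0] -= lr * -2 * ((x[0] - r0) * z + common * x[0])
        w[1] -= lr * -2 * ((x[1] - r1) * z + common * x[1])

# Background points stray from the learned subspace, so they reconstruct
# badly: thresholding the error gives a classifier.
sig_err = sum(sq_error(x, w) for x in signal) / len(signal)
bkg_err = sum(sq_error(x, w) for x in background) / len(background)
```

A real autoencoder would use deeper non-linear encoder and decoder networks, but the classification trick is the same: the reconstruction error acts as the discriminating score.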
