AMVA4NewPhysics

A Marie Skłodowska-Curie ITN funded by the Horizon 2020 program of the European Commission

Tag: neural networks

Classification with autoencoders: idle thought to working prototype in 2 hours

by Giles Strong

Continuing the series of 101 things to do in the cramped confines of a budget airliner:

Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts over a pre-flight Duvel, however, had got me thinking about autoencoders. Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”

Understanding Neural-Networks: Part IV – Improvements & Advantages

by Giles Strong

Welcome to the final instalment of my series on neural networks. If you’re just joining us, previous parts are here, here, and here.

Last time we looked at how we could fix some of the problems that were responsible for limiting the size of the networks we could train. Here we will be covering some additions we can make to the models in order to further increase their power. Having learnt how to build powerful networks, we will also look into why exactly neural-networks can be so much more powerful than other methods.
Continue reading “Understanding Neural-Networks: Part IV – Improvements & Advantages”

Understanding Neural-Networks: Part III – Diagnosis and treatment

by Giles Strong

Welcome to the third part of my introduction to understanding neural networks; previous parts here and here in case you missed them.

So it’s 1986: we’ve got a mathematically sensible way of optimising our networks, but they’re still not as performant as other methods… Well, we know that adding more layers will increase their power, so let’s just keep making them larger. Oh no! Now the network no longer trains! It just sits there, refusing to optimise. Continue reading “Understanding Neural-Networks: Part III – Diagnosis and treatment”
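For a taste of what goes wrong, the classic diagnosis of this 1986-era ailment is the vanishing-gradient problem: with the saturating sigmoid activations of the day, each layer scales the back-propagated gradient by a derivative of at most 0.25, so in a deep stack the early layers barely learn. Here is a minimal numpy sketch of the effect (an illustration of my own, not code from the post):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Back-propagate a unit gradient through ever-deeper stacks of sigmoid layers
for depth in [1, 5, 10, 20]:
    grad = 1.0
    for _ in range(depth):
        s = sigmoid(rng.normal(size=10))  # toy activations of one layer
        grad *= np.mean(s * (1.0 - s))    # sigmoid' = s(1 - s) <= 0.25
    print(f"depth {depth:2d}: gradient scale ~ {grad:.1e}")
```

The gradient shrinks geometrically with depth, which is exactly the "sits there refusing to optimise" behaviour described above.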

Understanding Neural-Networks: Part II – Back-propagation

by Giles Strong

Welcome back to the second part of my introduction to how neural-networks function! If you missed the first part, you can read it here.

When we left off, we’d understood that a neural network aims to form a predictive model by building a mathematical map from features in the data to a desired output. This map takes the form of layers of neurons, each applying a basic function. The map is built by altering the weights each neuron applies to the inputs. By aiming to minimise the loss function, which characterises the performance of the network, the optimal values of these weights may be learnt. We found that this can be a difficult task due to the large number of free parameters, but luckily the loss function is populated by many equally optimal minima. We simply need to reach one, and can therefore employ the gradient descent algorithm. Continue reading “Understanding Neural-Networks: Part II – Back-propagation”
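To make that last step concrete, here is a minimal sketch of gradient descent on a toy quadratic loss (an illustration of my own, not code from the post): start from random weights and repeatedly step against the gradient until the loss stops shrinking.

```python
import numpy as np

def loss(w):
    """Toy loss: a quadratic bowl with its minimum at w = (2, -1)."""
    return (w[0] - 2.0) ** 2 + (w[1] + 1.0) ** 2

def grad(w):
    """Analytic gradient of the toy loss."""
    return np.array([2.0 * (w[0] - 2.0), 2.0 * (w[1] + 1.0)])

w = np.random.default_rng(42).normal(size=2)  # random starting weights
lr = 0.1  # learning rate: the size of each downhill step

for step in range(100):
    w -= lr * grad(w)  # move against the gradient, i.e. downhill

print(w, loss(w))  # w converges towards (2, -1), the loss towards 0
```

In a real network the gradient is not available analytically like this; computing it efficiently through all the layers is what back-propagation, the subject of this post, is for.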

Understanding Neural-Networks: Part I

by Giles Strong

Last week, as part of one of my PhD courses, I gave a one hour seminar covering one of the machine learning tools which I have used extensively in my research: neural networks. Preparation of the seminar was very useful for me, since it required me to make sure that I really understood how the networks function, and I (think I) finally got my head around back-propagation – more on that later. In this post and, depending on length, the next (few), I intend to rework my seminar into something which might be of use to you, dear reader. Here goes!

A neural network is a method in the field of machine learning. This field aims to build predictive models to help solve complex tasks by exposing a flexible system to a large amount of data. The system is then allowed to learn by itself how to best form its predictions. Continue reading “Understanding Neural-Networks: Part I”
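As a small illustration of that idea, the sketch below exposes a flexible system (a tiny neural network) to labelled toy data and lets it learn its own decision rule. It uses scikit-learn's MLPClassifier, a tool choice of mine rather than anything the post prescribes:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy dataset: two interleaved half-moons, a classic non-linear problem
X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network with one hidden layer of 16 neurons
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)  # the system learns its own predictive rule

print("test accuracy:", clf.score(X_test, y_test))
```

Nothing task-specific is programmed in: the network is shown examples and figures out the boundary between the two classes by itself, which is the essence of the definition above.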

Six months in

by Giles Strong

Ciao. As the title suggests, it’s been about half a year now since I started my PhD research, and last week I presented a summary of my work so far to the CMS group here in Padova. I thought it would be an interesting exercise to translate my presentation into a more blog-friendly form, but for the more scientifically minded, I’ll link the original at the end. Here goes! Continue reading “Six months in”

MLHEP School in Lund

by Giles Strong

Hej! It’s been about a week now since I returned from Sweden, where I’d attended an excellent school on machine learning at Lund University. The course consisted of a series of lectures and seminars which started from the very basics of machine learning, and finished with us training convolutional neural-networks on GPU clusters kindly lent to us by the Finnish National Supercomputing Centre!

Lund is a small university-town in south-west Sweden, and is… Continue reading “MLHEP School in Lund”
