A year ago I posted an article that used word clouds to visualise the subjects covered by the authors of this blog. The clouds contained the stemmed and filtered nouns and verbs used in the posts of each author who had produced at least three articles. Giles had suggested revisiting the topic the following year for a comparison, so here it is. Continue reading “Summarising blog content”
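For readers curious how such clouds come together, here is a minimal sketch of that kind of pipeline in Python. The `nltk` and `wordcloud` packages and the `posts` variable are assumptions for illustration, not the exact code behind the original article.

```python
# A minimal sketch of the pipeline described above: keep only nouns and
# verbs, stem them, and render a word cloud per author. Assumes the
# `nltk` and `wordcloud` packages; `posts` is illustrative.
from collections import Counter
import nltk
from nltk.stem.snowball import SnowballStemmer
from wordcloud import WordCloud

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

stemmer = SnowballStemmer("english")
posts = ["..."]  # all posts by one author, as plain text

counts = Counter()
for post in posts:
    for word, tag in nltk.pos_tag(nltk.word_tokenize(post)):
        if tag.startswith(("NN", "VB")):  # keep nouns and verbs only
            counts[stemmer.stem(word.lower())] += 1

cloud = WordCloud(width=800, height=400).generate_from_frequencies(counts)
cloud.to_file("author_cloud.png")
```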
Continuing the series of 101 things to do in the cramped confines of a budget airliner:
Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts during a pre-flight Duvel, however, had got me thinking about autoencoders. Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”
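For context, one common way to turn autoencoders into a classifier — which may or may not match the prototype the post goes on to describe — is to train one autoencoder per class and classify new samples by reconstruction error. A hedged Keras sketch, with illustrative shapes and layer sizes:

```python
# A minimal sketch of one way to classify with autoencoders (an
# assumption: the post's actual prototype may differ): train one
# autoencoder per class and assign a sample to the class whose
# autoencoder reconstructs it with the lowest error.
import numpy as np
from tensorflow import keras

def make_autoencoder(n_features):
    inp = keras.Input(shape=(n_features,))
    code = keras.layers.Dense(8, activation="relu")(inp)   # bottleneck
    out = keras.layers.Dense(n_features)(code)             # reconstruction
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def fit_per_class(X_by_class, epochs=20):
    """X_by_class: dict mapping class label -> array (n_samples, n_features)."""
    models = {}
    for label, X in X_by_class.items():
        ae = make_autoencoder(X.shape[1])
        ae.fit(X, X, epochs=epochs, verbose=0)  # learn to reproduce the input
        models[label] = ae
    return models

def classify(models, x):
    # the best-reconstructing autoencoder names the predicted class
    errs = {label: float(np.mean((ae.predict(x[None, :], verbose=0)[0] - x) ** 2))
            for label, ae in models.items()}
    return min(errs, key=errs.get)
```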
So, it’s been a while since my last post; apologies for that. The summer has been both busy and eventful, so let me summarise what’s been happening. Continue reading “Summer activities at LIP-Lisbon”
Both the CMS and ATLAS collaborations are pretty vast, with around 5000 qualified scientists between them, and even more members working towards qualification. Everyone who is ‘qualified’ is listed as an author on any publication the collaboration produces, regardless of who actually did the major work for the analysis. Continue reading “Tau Identification At CMS With Neural Networks”
Have you ever wondered how Facebook suggests tags for the pictures you post on your wall, or how the photo library on your computer manages to automatically create albums containing pictures of particular people? Well, they use facial-recognition software based on Convolutional Neural Networks (CNNs).
The CNN is the most popular and effective method for object recognition: it is a specialized kind of neural network for processing data that has a known grid-like topology. The network employs a mathematical operation… Continue reading “Convolutional Neural Networks and neutrinos”
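The operation being introduced there is the convolution: sliding a small kernel across a grid of inputs to produce a feature map. A minimal illustration with scipy (the Sobel kernel is just an example choice):

```python
# Convolution in one picture: a small kernel slides across the input
# grid and produces one output value per position, forming a feature map.
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(28, 28)        # a toy grey-scale "image"
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])       # Sobel kernel: responds to vertical edges

feature_map = convolve2d(image, kernel, mode="valid")
print(feature_map.shape)              # (26, 26): one value per kernel position
```

In a CNN, the kernel values are not fixed like the Sobel kernel above; they are weights learnt from the data, so the network discovers for itself which local patterns matter.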
Last time we looked at how we could fix some of the problems that were limiting the size of the networks we could train. Here we will cover some additions we can make to the models in order to further increase their power. Having learnt how to build powerful networks, we will also look into exactly why neural-networks can be so much more powerful than other methods.
Continue reading “Understanding Neural-Networks: Part IV – Improvements & Advantages”
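As a taste of the sort of addition the post discusses (which specific improvements it covers is my assumption), here is dropout in a small Keras model; randomly zeroing activations during training stops the network over-relying on any single neuron:

```python
# One example of the kind of model addition described above (an
# assumption on my part as to which additions the post covers): dropout.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),        # silence half the activations per update
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```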
So it’s 1986: we’ve got a mathematically sensible way of optimising our networks, but they’re still not as performant as other methods… Well, we know that adding more layers will increase their power, so let’s just keep making them larger. Oh no! Now the network no longer trains! It just sits there, refusing to optimise. Continue reading “Understanding Neural-Networks: Part III – Diagnosis and treatment”
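The symptom described is characteristic of the vanishing-gradient problem (my reading of the teaser); a tiny numpy illustration:

```python
# A minimal numpy illustration of the (assumed) culprit, the vanishing
# gradient: back-propagation multiplies one derivative factor per layer,
# and the sigmoid's derivative is at most 0.25, so the signal reaching
# the early layers shrinks exponentially with depth.
import numpy as np

def dsigmoid(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

grad = 1.0
for layer in range(30):        # a 30-layer chain of sigmoid neurons
    grad *= dsigmoid(0.0)      # 0.25, the sigmoid's largest derivative
print(grad)                    # ~8.7e-19: essentially no learning signal
```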
Welcome back to the second part of my introduction to how neural-networks function! If you missed the first part, you can read it here.
When we left off, we’d understood that a neural network aims to form a predictive model by building a mathematical map from features in the data to a desired output. This map takes the form of layers of neurons, each applying a basic function. The map is built by altering the weights each neuron applies to its inputs; the optimal values of these weights may be learnt by minimising the loss function, which characterises the performance of the network. We found that this can be a difficult task due to the large number of free parameters, but luckily the loss function is populated by many equally optimal minima. We simply need to reach one of them, and can therefore employ the gradient-descent algorithm. Continue reading “Understanding Neural-Networks: Part II – Back-propagation”
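To make that gradient-descent step concrete, here is a minimal numpy sketch for a single linear neuron with a squared-error loss; the data are synthetic, and a real network repeats this per layer via back-propagation:

```python
# Gradient descent on the weights of one linear neuron: follow the
# negative gradient of the mean-squared-error loss until the weights
# converge. Data and learning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # features
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true                           # targets

w = np.zeros(3)                          # initial weights
lr = 0.1                                 # learning rate
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y) # d(loss)/d(w) for the MSE loss
    w -= lr * grad                       # step downhill
print(w)                                 # ≈ w_true
```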
Below is a short summary of the IML workshop at CERN, which Markus Stoye has also reported on in the previous post.
Day 1 was a discussion with industry experts about the state and future of ML. In the afternoon there was work on the community white-paper that the IML plans to publish. This document is meant to be a road-map for where we want HEP to be in 10 years’ time with regard to ML. The proto-document is… Continue reading “Some More Info on the IML Workshop”