Summer 2018 has been a busy time for the AMVA4NewPhysics network; we’ve had workshops, outreach events, training sessions, meetings, and plenty more besides. I wanted to go through and pick out a few things I was involved in. Continue reading “Science in the sun: AMVA4NP’s summer events”
Well folks, it’s been quite a while since my last post; apologies for that, it’s been a busy few months.
Towards the end of last year I wrote a post on optimising the hyper-parameters (depth, width, learning rate, et cetera) of neural networks. In this post I described how I was trying to use Bayesian methods to ‘quickly’ find useful sets of parameters. Continue reading “Hyper-parameters revisited”
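The core idea behind a Bayesian search like this is to fit a surrogate model to the hyper-parameter evaluations made so far and use it to pick the most promising point to try next. Below is a minimal sketch of that loop for a single hyper-parameter, with a made-up quadratic ‘validation loss’ standing in for actually training a network; the Gaussian-process surrogate and expected-improvement acquisition are one common choice, not necessarily the exact method used in the post.

```python
# Toy Bayesian optimisation of one hyper-parameter ("width" on [0, 1]).
# The loss function is an invented stand-in for a real training run.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def loss(width):
    # pretend validation loss; true optimum at width = 0.3
    return (width - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(4, 1))          # a few initial random trials
y = np.array([loss(x[0]) for x in X])

grid = np.linspace(0, 1, 200).reshape(-1, 1)
for _ in range(10):
    # surrogate model of loss(width) from the trials so far
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  alpha=1e-6, normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    # expected improvement over the best loss seen so far
    best = y.min()
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]            # most promising next trial
    X = np.vstack([X, x_next])
    y = np.append(y, loss(x_next[0]))

print(round(float(X[np.argmin(y), 0]), 2))  # best width found
```

In practice each call to `loss` would be a full training run, which is exactly why spending a little compute on the surrogate to choose trials carefully pays off.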
The week before last I was presenting an update of some of my analysis work to the rest of my group. The work involved developing a neural network to classify particle collisions at the LHC. Continue reading “Train-time/test-time data augmentation”
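Test-time augmentation boils down to averaging a model’s prediction over transformed copies of each event that the physics should be invariant under. Here is a tiny illustration with an invented classifier whose score has a spurious dependence on the azimuthal angle phi; averaging over azimuthal rotations washes that dependence out while the genuinely informative quantity (pT) survives.

```python
# Test-time augmentation in miniature. The "model" below is a made-up
# stand-in: its score mixes a real feature (pT) with a spurious
# dependence on the azimuthal angle phi.
import numpy as np

def model(px, py):
    pt = np.hypot(px, py)            # rotation-invariant feature
    phi = np.arctan2(py, px)         # detector azimuth
    return pt + 0.5 * np.sin(phi)    # spurious phi dependence

def predict_tta(px, py, n=8):
    # average the score over n azimuthal rotations, a symmetry of the problem
    scores = []
    for k in range(n):
        a = 2 * np.pi * k / n
        rx = px * np.cos(a) - py * np.sin(a)
        ry = px * np.sin(a) + py * np.cos(a)
        scores.append(model(rx, ry))
    return float(np.mean(scores))

print(round(predict_tta(3.0, 4.0), 3))  # ≈ 5.0: the sin(phi) term averages to zero
```

The same augmentations applied at train time instead teach the network the invariance directly, which is the trade-off the post’s title refers to.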
A few days before I returned from CERN at the beginning of the month, I attended a talk on the upcoming TrackML challenge. This is a competition beginning this month in which members of the public will be invited to try to find a solution to the quite tricky problem of accurately reconstructing particle trajectories in the collisions at the LHC. The various detectors simply record the hits where particles pass by; to make use of this data, however, the hits in surrounding detector layers must be combined into a single flight path, called a track. Continue reading “Higgs Hacking”
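To make the problem concrete, here is a deliberately naive toy version of track building: each detector layer records hit positions, and we greedily chain every hit in the innermost layer to the nearest hit in each successive layer. The hit coordinates are invented, and real tracking is far harder (curved trajectories in a magnetic field, thousands of hits, shared and missing hits), which is exactly why the challenge exists.

```python
# Naive greedy track building over invented (x, y) hit positions.
# Each inner list holds the hits recorded in one detector layer.
layers = [
    [(0.0, 1.0), (5.0, 1.0)],   # layer 0 (innermost)
    [(0.4, 2.0), (5.2, 2.0)],   # layer 1
    [(0.9, 3.0), (5.3, 3.0)],   # layer 2
]

def nearest(point, candidates):
    # pick the candidate hit closest to the current end of the track
    return min(candidates,
               key=lambda c: (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2)

tracks = []
for seed in layers[0]:          # each innermost hit seeds one candidate track
    track = [seed]
    for layer in layers[1:]:
        track.append(nearest(track[-1], layer))
    tracks.append(track)

print(len(tracks))  # 2 candidate tracks, one per seed hit
```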
Bonjour! As I write, I’m three weeks into my month-long secondment at CERN, near Geneva. CERN, home to the Large Hadron Collider, is the world’s largest particle-physics research centre. It is also the location of the CMS experiment, which I work on. Continue reading “Staying at CERN”
Cover photo unrelated – it’s just some rad fractal broccoli.
Just over one and a half years ago I wrote a post on some of the tips and tricks I’d found useful in trying to organise myself and improve my efficiency. While I was searching for a post topic, it was suggested that I revisit it to compare how my workload and approaches have changed, so here goes! Continue reading “Efficiency revisited”
A few months ago I wrote about some work I was doing on improving the way a certain kind of particle is detected at CMS, by replacing the existing algorithm with a neural network. I recently resumed this work and have now got to the point where I can show a significant improvement over the existing method. The design of the neural network, however, was imported from some other work, and I now want to adjust it to better suit my problem. Continue reading “Adjusting hyper-parameters: First step into Bayesian optimisation of DNNs”
For the past six or so years, I’ve practised a martial art called Shorinji Kempo. Like many other arts, it incorporates several philosophies and concepts. One of these, The Three Teachings of Ken, concerns one’s progression in learning the various techniques. Simply put, it describes three stages of mastery: shu – learn & copy, ha – adjust & adapt, ri – master & break free. Continue reading “On acquiring skills – shu, ha, ri”
Continuing the series of 101 things to do in the cramped confines of a budget airliner:
Last Saturday evening I flew back from the mid-term meeting of my research network. The trip from Brussels to Lisbon takes about three hours, and since my current work requires an internet connection, I’d planned to relax (as best I could). Idle thoughts during a pre-flight Duvel, however, had got me thinking about autoencoders. Continue reading “Classification with autoencoders: idle thought to working prototype in 2 hours”
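One common way to press autoencoders into classification service is to train them on a single class and then flag anything they reconstruct poorly. As a minimal sketch of that idea, the snippet below uses a linear autoencoder (PCA and its inverse transform) in place of a neural one, with entirely made-up ‘background’ and ‘signal’ data; I don’t claim this is the prototype from the post, just the general pattern.

```python
# Anomaly-style classification by reconstruction error, using PCA as a
# linear stand-in for a neural autoencoder. Data and the 95% threshold
# are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# "background": variance concentrated in the first 3 of 10 features
background = rng.normal(0, 1, size=(500, 10)) @ np.diag([3, 3, 3] + [0.1] * 7)
# "signal": shifted away from the background distribution
signal = rng.normal(2, 1, size=(50, 10))

ae = PCA(n_components=3).fit(background)   # "encode" to 3 dims, trained on background only

def recon_error(x):
    # distance between each event and its encode-then-decode reconstruction
    return np.linalg.norm(x - ae.inverse_transform(ae.transform(x)), axis=1)

threshold = np.quantile(recon_error(background), 0.95)
flagged = recon_error(signal) > threshold
print(flagged.mean())  # fraction of signal events flagged as anomalous
```

Because the autoencoder only learns to compress the background, signal events land far from its learned subspace and almost all of them exceed the threshold.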