A Marie Skłodowska-Curie ITN funded by the Horizon 2020 programme of the European Commission



Hyper-parameters revisited

by Giles Strong


Well folks, it’s been quite a while since my last post; apologies for that — it’s been a busy few months.

Towards the end of last year I wrote a post on optimising the hyper-parameters (depth, width, learning rate, et cetera) of neural networks. In that post I described how I was trying to use Bayesian methods to ‘quickly’ find useful sets of parameters. Continue reading “Hyper-parameters revisited”
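The post itself discusses Bayesian methods, but the shape of the problem — searching a space of depths, widths, and learning rates for a configuration that minimises some validation loss — can be sketched with a much simpler baseline, plain random search. Everything below is illustrative: the space, ranges, and `toy_loss` function are my own stand-ins, not taken from the post.

```python
import random

# Hypothetical search space for the hyper-parameters mentioned above;
# names and ranges are illustrative only.
SPACE = {
    "depth": [2, 3, 4, 5],
    "width": [16, 32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

def toy_loss(params):
    # Stand-in for an expensive train-and-validate run of the network.
    return (params["depth"] - 3) ** 2 + abs(params["learning_rate"] - 1e-3)

def random_search(space, loss_fn, n_trials=20, seed=0):
    """Draw random configurations and keep the best one seen so far."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.choice(choices) for name, choices in space.items()}
        score = loss_fn(params)
        if score < best_loss:
            best_params, best_loss = params, score
    return best_params, best_loss

best, loss = random_search(SPACE, toy_loss)
```

A Bayesian optimiser differs from this sketch in one key way: instead of drawing configurations blindly, it fits a surrogate model to the trials evaluated so far and uses it to pick the next configuration — which is what makes it ‘quick’ when each trial is an expensive training run.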

Are your analyses reproducible?

by Pablo de Castro

One of the main principles of the scientific method is reproducibility: the ability to independently duplicate an entire experiment or study at a later time.

For those doing scientific data analyses, like the members of this network, the same principle applies: all the data, methods, and tools should be provided and documented in enough detail that other researchers can obtain exactly the same results on the same datasets, or redo the analysis with new data. Do you think this is an unrealistic expectation, or the way to go?
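Obtaining "exactly the same results" in practice requires, among other things, pinning every source of randomness and giving collaborators a way to verify their numbers match. A minimal sketch of those two ingredients — the function names and the use of a SHA-256 fingerprint are my own choices, not something prescribed by the post:

```python
import hashlib
import json
import os
import random

def set_seed(seed=42):
    """Fix the stdlib RNG and Python hashing so reruns are repeatable."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)

def fingerprint(results):
    """Hash a JSON-serialisable results object.

    Publishing this digest alongside an analysis lets others check
    they reproduced exactly the same numbers, without comparing
    large files by eye.
    """
    payload = json.dumps(results, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

set_seed(123)
results = {"samples": [random.random() for _ in range(3)]}
digest = fingerprint(results)
```

A real analysis would also need to pin library versions and seed any other RNGs in play (NumPy, the ML framework, the GPU), but the pattern is the same: make every run a pure function of recorded inputs.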

Continue reading “Are your analyses reproducible?”
