I am a postdoc working on machine learning methods and their applications in astrophysics.
I did my PhD and master's degree in physics, and bachelor's degrees in both math and physics, at the Ludwig-Maximilians-Universität Munich.
My PhD thesis was supervised by Torsten Enßlin.
My interests are artificial intelligence, programming, and Bayesian statistics.
The main objective of my PhD was mapping the interstellar dust in our corner of the Milky Way. This inference task, with hundreds of millions of parameters, was carried out on a supercomputer and combines the data of many stellar surveys. The resulting map covers a volume of about (700 pc)³ at a pixel resolution of 1 pc (one parsec is about 3.26 light years). Thanks to the Bayesian method designed specifically for this task, we obtain very precise distance estimates for all larger dust clouds, along with uncertainty estimates on all results.
Connect Four is a small strategic board game. In this free-time project, I implemented a simple Q-learner that learns to play the game via self-play. My code implements the network, the environment, and the replay memory buffer from scratch.
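To give a flavor of the ingredients involved, here is a minimal, purely illustrative sketch of Q-learning with a replay buffer. It uses a tabular Q-function instead of the project's neural network, and the state and action names are placeholders, not the actual project code.

```python
import random
from collections import deque

GAMMA = 0.9    # discount factor
ALPHA = 0.5    # learning rate

q_table = {}   # maps (state, action) -> estimated return

def q(state, action):
    return q_table.get((state, action), 0.0)

def update(state, action, reward, next_state, next_actions):
    """Standard Q-learning update toward the bootstrapped target."""
    best_next = max((q(next_state, a) for a in next_actions), default=0.0)
    target = reward + GAMMA * best_next
    q_table[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))

# Experience replay: store transitions, then train on random samples.
buffer = deque(maxlen=1000)
buffer.append(("s0", "drop_col_3", 1.0, "s1", []))  # terminal winning move

for s, a, r, s2, acts in random.sample(buffer, k=1):
    update(s, a, r, s2, acts)

print(q("s0", "drop_col_3"))  # 0.5 after one step toward the reward of 1.0
```

In the actual agent, the table lookup is replaced by a network evaluated on the board state, and transitions are generated by self-play rather than inserted by hand.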
NIFTy is a Python software library for Gaussian process regression. Its core feature since version 1 has been the consistent treatment of the discretization of continuous fields. Since version 5, it also provides automatic differentiation and the modern inference algorithm Metric Gaussian Variational Inference (MGVI). I have been contributing to the project since version 3, implementing features and bringing in ideas on the fundamental structure of the package.
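For readers unfamiliar with Gaussian process regression, here is a tiny, library-free sketch of its central computation: the posterior mean as a kernel-weighted combination of the data. This is not NIFTy code (NIFTy's API and its handling of field discretization are far richer); the kernel and data below are illustrative.

```python
import math

def kernel(x1, x2, length=1.0):
    # Squared-exponential covariance between two input points
    return math.exp(-0.5 * (x1 - x2) ** 2 / length ** 2)

def gp_posterior_mean(x_train, y_train, x_star, noise=1e-6):
    n = len(x_train)
    # Gram matrix with a small noise term on the diagonal
    K = [[kernel(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    # Solve K alpha = y by Gauss-Jordan elimination (fine for tiny n)
    A = [row[:] + [y] for row, y in zip(K, y_train)]
    for i in range(n):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        for j in range(n):
            if j != i:
                factor = A[j][i]
                A[j] = [vj - factor * vi for vj, vi in zip(A[j], A[i])]
    alpha = [A[i][n] for i in range(n)]
    # Posterior mean: kernel-weighted combination of the training data
    return sum(kernel(x_star, xi) * ai for xi, ai in zip(x_train, alpha))

x_train, y_train = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
print(gp_posterior_mean(x_train, y_train, 1.0))  # close to 1.0 (interpolates)
```

With near-zero noise the posterior mean interpolates the training points; increasing `noise` smooths the fit instead.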
Neural networks suffer from the vanishing/exploding gradient problem. In this small free-time project, I investigated the effect of multiplying the gradient with static positive weights. These weights are obtained from statistical considerations such that, when training begins, the expected magnitude of the scaled gradient is a fixed fraction of the magnitude of the network weights. This leads to faster training at the beginning of the first epoch, but standard approaches overtake it later on. If the weights that precondition the gradient could be updated during training, training might be faster overall.
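The core idea can be sketched in a few lines: compute one static positive scale per layer at initialization so that the RMS gradient magnitude becomes a fixed fraction of the RMS weight magnitude, then multiply gradients by that scale throughout training. All names and numbers below are illustrative, not the project's actual code.

```python
import math
import random

TARGET_RATIO = 0.01  # want |scaled grad| ≈ 1% of |weights| at init

def rms(values):
    return math.sqrt(sum(v * v for v in values) / len(values))

def static_scales(weights_per_layer, grads_per_layer):
    """One multiplicative gradient weight per layer, fixed at initialization."""
    return [TARGET_RATIO * rms(w) / rms(g)
            for w, g in zip(weights_per_layer, grads_per_layer)]

def sgd_step(weights, grads, scales, lr=1.0):
    # The static scale acts as a per-layer preconditioner on the gradient
    return [[wi - lr * s * gi for wi, gi in zip(w, g)]
            for w, g, s in zip(weights, grads, scales)]

random.seed(0)
# Two toy "layers" whose gradient magnitudes differ by many orders of
# magnitude, mimicking vanishing/exploding gradients through depth.
weights = [[random.gauss(0, 1) for _ in range(8)] for _ in range(2)]
grads = [[random.gauss(0, 1e-4) for _ in range(8)],   # vanishing
         [random.gauss(0, 1e3) for _ in range(8)]]    # exploding

scales = static_scales(weights, grads)
for w, g, s in zip(weights, grads, scales):
    print(rms([s * gi for gi in g]) / rms(w))  # TARGET_RATIO for both layers
```

By construction the scaled-gradient-to-weight ratio is exactly `TARGET_RATIO` at initialization for every layer; the open question noted above is how to adapt these scales as training proceeds.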
Here is a list of publications I contributed to. You can also find me on Google Scholar.