
Purpose
The idea is to bring together people on the Munich/Garching campus interested in applying Bayesian methods and information theory to their research. The emphasis is on astronomy, astrophysics and cosmology, but other applications are welcome. Typical applications are model fitting and evaluation, image deconvolution, and spectral analysis. More general statistical topics will also be included. Software packages for Bayesian analysis are another focus.
The initial announcement in 2011 met with a large response, and about 170 people are currently signed up to the mailing list. The meetings are very well attended, which motivates us to continue this initiative, now in its 6th year. We aim for one meeting per month, but we are flexible. Friday is the preferred day. We welcome suggestions for speakers, including self-suggestions! Your views are important and can be shared via our mailing list and at the meetings.
- January 2018: we are pleased to announce the continued support from the Excellence Cluster Universe (TUM) for this year. This backing since 2015 has allowed us to invite high-profile speakers.
- June 2017: Stella Veith left for Ireland; we wish her all the best.
- February 2017: we are pleased to announce the continued support from the Excellence Cluster Universe (TUM) for this year. This backing since 2015 has allowed us to invite high-profile speakers.
- February 2017: we welcome Stella Veith (MPA) to help with the organization.
- January 2016: we were pleased to announce the continued support from the Excellence Cluster Universe (TUM) for that year.
- February 2015: we were pleased to report that we have received financial support from the Excellence Cluster Universe (TUM), allowing us to invite more external speakers in future.
Organizers
- Fabrizia Guglielmetti (ESO)
- Andy Strong (MPE)
- Torsten Enßlin (MPA)
- Paul Nandra (MPE)
- Frederik Beaujean (LMU)
- Allen Caldwell (MPP)
- Udo von Toussaint (IPP)
Mailing list
The list is used to announce meetings and can also be used to exchange information between members.
To subscribe or unsubscribe, click here.
Next Talks
- On Model Selection in Cosmology
May 3rd 2019, 14:00: New Seminar room at MPA: Martin Kerscher (LMU Munich)
I will review some of the common methods for model selection: the goodness of fit, the likelihood ratio test, Bayesian model selection using Bayes factors, and the classical as well as the Bayesian information theoretic approaches. I will illustrate these different approaches by comparing models for the expansion history of the Universe. I will highlight the premises and objectives entering these different approaches to model selection and finally recommend the information theoretic approach.
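As a quick illustration of the information criteria mentioned in the abstract, the sketch below compares two toy models with AIC and BIC. It is an editorial example with invented model names and synthetic data, not material from the talk.

```python
# Toy sketch (not from the talk): comparing a constant and a linear model
# on synthetic data with AIC and BIC. All names and numbers are invented.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0.1, 1.5, 40)
sigma = 0.05
y = 1.0 + 0.3 * x + rng.normal(0.0, sigma, x.size)     # synthetic "observations"

def constant_model(x, a):
    return a * np.ones_like(x)

def linear_model(x, a, b):
    return a + b * x

def information_criteria(model, n_par):
    # Fit the model, then evaluate AIC and BIC from the Gaussian log-likelihood.
    popt, _ = curve_fit(model, x, y, p0=np.ones(n_par), sigma=np.full_like(x, sigma))
    chi2 = np.sum(((y - model(x, *popt)) / sigma) ** 2)
    loglike = -0.5 * chi2                              # up to an additive constant
    aic = 2 * n_par - 2 * loglike
    bic = n_par * np.log(x.size) - 2 * loglike
    return aic, bic

for model, n_par in [(constant_model, 1), (linear_model, 2)]:
    aic, bic = information_criteria(model, n_par)
    print(f"{model.__name__:>14s}: AIC = {aic:6.1f}  BIC = {bic:6.1f}")
```

Lower AIC/BIC indicates the preferred model; BIC penalises extra parameters more strongly as the data set grows.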
All Talks
- Jaynes's principle and statistical mechanics
- A hierarchical model for the energies and arrival directions of ultra-high energy cosmic rays (UHECR)
- UBIK - a universal Bayesian imaging toolkit
- Bayesian Probabilistic Numerical Methods
- Probabilistic Numerics — Uncertainty in Computation
- Collaborative Nested Sampling for analysing big datasets
- Metrics for Deep Generative Models
- 3D Reconstruction and Understanding of the Real World
- Adaptive Harmonic Mean Integration and Application to Evidence Calculation
- On the Confluence of Deep Learning and Proximal Methods
- Exponential Families on Resource-Constrained Systems
- The rationality of irrationality in the Monty Hall problem
- ROSAT and XMMSLEW2 counterparts using Nway: an accurate algorithm to pair sources simultaneously between N catalogs
- Bayesian calibration of predictive computational models of arterial growth
- Variational Bayesian inference for stochastic processes
- What does Bayes have to say about tensions in cosmology and neutrino mass hierarchy?
- Uncertainty analysis using profile likelihoods and profile posteriors
- Uncertainties of Monte Carlo radiative-transfer simulations
- A Global Bayesian Analysis of Neutrino Mass Data
- A new method for inferring the 3D matter distribution from cosmic shear data
- Phase-space reconstruction of the cosmic large-scale structure
- Field dynamics via approximative Bayesian reasoning
- Surrogate minimization in high dimensions
- Accurate Inference with Incomplete Information
- Simultaneous Bayesian Location and Spectral Analysis of Gamma-Ray Bursts
- Bayesian methods in the search for gravitational waves
- An introduction to QBism
- Statistical Inference in Radio Astronomy: Bayesian Radio Imaging with the RESOLVE package
- Galactic Tomography
- Modern Probability Theory
- Rethinking the Foundations
- Bayesian Multiplicity Control
- Learning Causal Conditionals
- Confidence and Credible Intervals
- Bayes vs frequentist: why should I care?
- Dynamic system classifier
- Bayesian Reinforcement Learning
- PolyChord: next-generation nested sampling
- The Gaussian onion: priors for a spherical world
- Approximate Bayesian Computing (ABC) for model choice
- Bayesian tomography
- Stellar and Galactic Archaeology with Bayesian Methods
- Supersymmetry in a classical world: new insights on stochastic dynamics from topological field theory
- Bayesian regularization for compartment models in imaging
- Approximate Bayesian Computation in Astronomy
- p-Values for Model Evaluation
- Uncertainty quantification for computer models
- A Bayesian method for high precision pulsar timing - Obstacles, narrow pathways and chances
- Information field dynamics
- On the on/off problem
- Bayesian analyses in B-meson physics
- Calibrated Bayes for X-ray spectra
- Unraveling general relationships in multidimensional datasets
- Probabilistic image reconstruction with radio interferometers
- Turbulence modelling and control using maximum entropy principles
- DIP: Diagnostics for insufficiencies of posterior calculations - a CMB application
- Bayesian modelling of regularized and gravitationally lensed sources
- Hierarchical modeling for astronomers
- Photometric Redshifts Using Random Forests
- Bayesian search for other Earths: low-mass planets around nearby M dwarfs
- The NIFTY way of Bayesian signal inference
- Probability, propensity and probability of propensity values
- Data analysis for neutron spectrometry with liquid scintillators: applications to fusion diagnostics
- Surrogates
- Bayesian Cross-Identification in Astronomy
- Recent Developments and Applications of Bayesian Data Analysis in Fusion Science
- Large Scale Bayesian Inference in Cosmology
- Bayesian mixture models for background-source separation
- Photometric redshifts
- The uncertain uncertainties in the Galactic Faraday sky
- The Bayesian Analysis Toolkit - a C++ tool for Bayesian inference
- Beyond least squares
- Signal Discovery in Sparse Spectra - a Bayesian Analysis
- Bayesian inference in physics
- FIRST MEETING: Introduction to Bayes Forum and Information field theory - turning data into images
January 11th 2019, 14:00: New Seminar room at MPA: Ulrich Schollwöck (LMU Munich)
Abstract: In this talk I will review Jaynes' derivation of equilibrium statistical mechanics based on his celebrated principle. I will address certain subtleties arising in the classical and quantum-mechanical treatments, respectively, and discuss some of the arguments raised against his derivation.
talk as pdf
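For readers unfamiliar with the principle, the core of the maximum-entropy argument can be stated in a few lines (a standard textbook form, not taken from the slides): maximizing the entropy subject to normalization and a fixed mean energy yields the canonical distribution.

```latex
\mathcal{L} = -\sum_i p_i \ln p_i
  - \lambda_0 \Big(\sum_i p_i - 1\Big)
  - \beta \Big(\sum_i p_i E_i - \langle E\rangle\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = 0
\;\Rightarrow\;
p_i = \frac{e^{-\beta E_i}}{Z}, \quad Z = \sum_i e^{-\beta E_i}.
```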
November 16th 14:00: New Seminar room at MPA: Francesca Capel (KTH - Sweden)
Abstract: The study of UHECR is challenged by both the rarity of events and the difficulty in modelling their production, propagation and detection. The physics behind these processes is complicated, requiring high-dimensional models which are impossible to fit to data using traditional methods. I present a Bayesian hierarchical model which enables a joint fit of the UHECR energy spectrum and arrival directions with a physical model of UHECR phenomenology. In this way, possible associations with potential astrophysical sources can be assessed in a physically and statistically principled manner. The importance of including the UHECR energies is demonstrated through simulations and results from the application of the model to data from the Pierre Auger observatory are shown. The potential to extend this framework to more realistic physical models and multi-messenger observations is also discussed.
November 9th 14:00: New Seminar room at MPA: Jakob Knollmüller (MPA Garching)
Abstract: The abstract problem of recovering signals from data is at the core of any scientific progress. In astrophysics, the signal of interest is the Universe around us. It varies in space, time and frequency and is populated by a large variety of phenomena. To capture some of its aspects, large and complex instruments are built. How can their information be combined consistently into one picture? UBIK allows data from multiple instruments to be fused within one unifying framework. A joint reconstruction from multiple instruments can provide a deeper picture of the same object, as it becomes far easier to distinguish between the signal and instrumental effects. Underlying UBIK is variational Bayesian inference, which allows joint uncertainty quantification of the parameters. The first incarnation of UBIK demonstrates the versatility of this approach with a number of examples.
October 12th 14:00: New Seminar room at MPA: Tim Sullivan (Free University of Berlin and Zuse Institute Berlin)
Abstract: Numerical computation, such as the numerical solution of a PDE, can be modelled as a statistical inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in a classical numerical method, thereby restricting attention to a meaningful subclass of probabilistic numerical methods distinct from classical average-case analysis and information-based complexity. The main technical consideration here is that the data are non-random and thus the standard Bayes' theorem does not hold. General conditions will be presented under which numerical methods based upon such Bayesian probabilistic foundations are well-posed, and a sequential Monte Carlo method will be shown to provide consistent estimation of the posterior. The paradigm is extended to computational "pipelines", through which a distributional quantification of numerical error can be propagated. A sufficient condition is presented for when such propagation can be endowed with a globally coherent Bayesian interpretation, based on a novel class of probabilistic graphical models designed to represent a computational work-flow. The concepts are illustrated through explicit numerical experiments involving both linear and non-linear PDE models.
talk as pdf
September 21st 14:00, New Seminar room at MPA: Philipp Hennig (University of Tübingen)
The computational complexity of inference from data is dominated by the solution of non-analytic numerical problems (large-scale linear algebra, optimization, integration, the solution of differential equations). But a converse of sorts is also true: numerical algorithms for these tasks are inference engines! They estimate intractable, latent quantities by collecting the observable results of tractable computations. Because they also decide adaptively which computations to perform, these methods can be interpreted as autonomous learning machines. This observation lies at the heart of the emerging topic of Probabilistic Numerical Computation, which applies the concepts of probabilistic (Bayesian) inference to the design of algorithms, assigning a notion of probabilistic uncertainty to the result even of deterministic computations. I will outline how this viewpoint is connected to that of classic numerical analysis, and show that thinking about computation as inference affords novel, practical answers to the challenges of large-scale, big-data inference.
talk as pdf
September 12th 14:00, New Seminar room at MPA: Johannes Buchner (PUC, Chile)
The data torrent unleashed by current and upcoming astronomical surveys demands scalable analysis methods. Machine learning approaches scale well. However, separating the instrument measurement from the physical effects of interest, dealing with variable errors, and deriving parameter uncertainties is usually an afterthought. Classic forward-folding analyses with Markov Chain Monte Carlo or nested sampling enable parameter estimation and model comparison, even for complex and slow-to-evaluate physical models. However, these approaches require independent runs for each data set, implying an unfeasible number of model evaluations in the Big Data regime. Here I present a new algorithm, collaborative nested sampling, for deriving parameter probability distributions for each observation. Importantly, the number of physical model evaluations scales sub-linearly with the number of data sets, and no assumptions about homogeneous errors, Gaussianity, the form of the model or heterogeneity/completeness of the observations need to be made. Collaborative nested sampling has applications in speeding up analyses of large surveys, integral-field-unit observations, and Monte Carlo simulations.
preprint: arXiv:1707.04476
July 27th: New Seminar room at MPA: Patrick van der Smagt (AI Research, Data:Lab, Volkswagen Group, Munich, Germany)
Abstract: Neural networks have become excellent candidates for probabilistic inference. In combination with variational inference, a powerful tool ensues with which efficient generative models can represent probability densities, removing the need for sampling. Not only does this lead to better generalisation, but such models can also be used to simulate highly complex dynamical systems. In my talk I will explain how we can predict time series observations, and use those to obtain efficient approximations to optimal control in complex agents. These unsupervised learning methods are demonstrated for time series modelling and control in robotic and other applications.
July 6th 14:00, New Seminar room at MPA: Matthias Niessner (TUM)
In this talk, I will cover our latest research on 3D reconstruction and semantic scene understanding. To this end, we use modern machine learning techniques, in particular deep learning algorithms, in combination with traditional computer vision approaches. Specifically, I will talk about real-time 3D reconstruction using RGB-D sensors, which enable us to capture high-fidelity geometric representations of the real world. In a new line of research, we use these representations as input to 3D Neural Networks that infer semantic class labels and object classes directly from the volumetric input. In order to train these data-driven learning methods, we introduce several annotated datasets, such as ScanNet and Matterport3D, that are directly annotated in 3D and allow tailored volumetric CNNs to achieve remarkable accuracy. In addition to these discriminative tasks, we put a strong emphasis on generative models. For instance, we aim to predict missing geometry in occluded regions, and obtain completed 3D reconstructions with the goal of eventual use in production applications. We believe that this research has significant potential for application in content creation scenarios (e.g., for Virtual and Augmented Reality) as well as in the field of Robotics where autonomous entities need to obtain an understanding of the surrounding environment.
June 29 14:00, Room 313, Max Planck Institute for Physics, Föhringer Ring 6, Munich: Rafael Schick (TUM & MPP)
Evaluating the integral of a density function, e.g. to calculate the Bayes factor, can be computationally very costly or even impossible, all the more so if the density function is multi-modal or high-dimensional. However, if one has already generated samples from such a density function, e.g. by using a Markov Chain Monte Carlo algorithm, then harmonic mean integration provides a technique that can directly use such samples to calculate an integral estimate. By restricting the harmonic mean integration to subvolumes, one can avoid the well-known problem of the harmonic mean estimator having infinite variance.
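To make the subvolume idea concrete, here is a minimal toy sketch (my own illustration, not the speaker's implementation): for posterior samples theta_i and a region A with prior mass pi(A), the posterior expectation of 1_A(theta)/L(theta) equals pi(A)/Z, so restricting the harmonic mean to A keeps the weights bounded.

```python
# Toy illustration of a harmonic-mean evidence estimate restricted to a
# subvolume A (assumed setup; all numbers invented).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy problem: flat prior on [-5, 5], Gaussian likelihood N(0, 1).
prior_width = 10.0
like = stats.norm(0.0, 1.0)
true_Z = 1.0 / prior_width              # int L * prior dtheta, to excellent approximation

# Posterior samples (here the posterior is simply the unit Gaussian).
theta = like.rvs(size=100_000, random_state=rng)

# Subvolume A = [-1, 1]; its prior mass is 2 / prior_width.
in_A = np.abs(theta) <= 1.0
prior_mass_A = 2.0 / prior_width

# E_post[ 1_A(theta) / L(theta) ] = prior_mass_A / Z, so invert for Z.
inv_L = in_A / like.pdf(theta)
Z_hat = prior_mass_A / inv_L.mean()
print(f"estimated Z = {Z_hat:.4f}   true Z = {true_Z:.4f}")
```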
June 1st 14:00, Old Seminar room at MPA: Daniel Cremers (TUM)
While numerous low-level computer vision problems such as denoising, deconvolution or optical flow estimation were traditionally tackled with optimization approaches such as proximal methods, recently deep learning approaches trained on numerous examples have demonstrated impressive and sometimes superior performance on the respective tasks. In my presentation, I will discuss recent efforts to bring together these seemingly very different paradigms, showing how deep learning can profit from proximal methods and how proximal methods can profit from deep learning. This confluence makes it possible to boost deep learning approaches, both through drastically faster training times and through substantially better generalization to novel problems that differ from the ones they were trained for (generalization / domain adaptation).
4 May 2018 14:00 New Seminar room at MPA: Nico Piatkowski (TU Dortmund)
Abstract: In order to give preference to a particular solution with desirable properties, a regularization term can be included in optimization problems. Particular types of regularization help us to solve ill-posed problems, avoid overfitting of machine learning models, and select relevant (groups of) features in data analysis. In this talk, I demonstrate how regularization identifies exponential family members with reduced computational requirements. More precisely, we will see how to (1) reduce the memory requirements of time-variant spatio-temporal probabilistic models, (2) reduce the arithmetic requirements of undirected probabilistic models, and (3) connect the parameter norm to the complexity of probabilistic inference to derive a new quadrature-based inference procedure.
talk as pdf
23 March 2018: New Seminar room at MPA: Torsten Enßlin and Margret Westerkamp (MPA)
Abstract: The rational solution of the Monty Hall problem unsettles many people. Most people, including us, think it feels wrong to switch the initial choice of one of the three doors, despite having fully accepted the mathematical proof for its superiority. Many people, if given the choice to switch, think the chances are fifty-fifty between their options, but still strongly prefer to stick to their initial choice. Is there some rationale behind these irrational feelings?
preprint
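As a sanity check on the 2/3 claim, a quick Monte Carlo simulation (an editorial illustration, not part of the talk or preprint) reproduces the well-known result:

```python
# Quick Monte Carlo check of the Monty Hall result: switching wins ~2/3 of the time.
import random

def play(switch, n=100_000):
    wins = 0
    for _ in range(n):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / n

print("stay:  ", play(switch=False))   # ~ 1/3
print("switch:", play(switch=True))    # ~ 2/3
```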
12 January 2018 14:00 New Seminar room at MPA: Mara Salvato (MPE)
Abstract: The increasing number of surveys available at every wavelength is allowing the construction of Spectral Energy Distributions (SEDs) for any kind of astrophysical object. However, a) different surveys/instruments, in particular at X-ray, UV and MIR wavelengths, have different positional accuracy and resolution, and b) the survey depths do not match each other, so that, depending on redshift and SED, a given source might or might not be detected at a certain wavelength. All this makes the pairing of sources among catalogs non-trivial, especially in crowded fields. In order to overcome this issue, we propose a new algorithm that combines the best of Bayesian and frequentist methods but that can be used as the common Likelihood Ratio (LR) technique in the simplest applications. In this talk I will introduce the code and show how it has been used for finding the ALLWISE counterparts to the X-ray ROSAT and XMMSLEW2 all-sky surveys.
talk as pdf
15 December 2017 14:00 New Seminar room at MPA: Sebastian Kehl (MPA)
Abstract: Measurement data used in the calibration of complex nonlinear computational models for the prediction of the growth of abdominal aortic aneurysms (AAAs) - expanding balloon-like pathological dilations of the abdominal aorta, the prediction of which poses a formidable challenge in clinical practice - are commonly available as a sequence of clinical image data (MRT/CT/US). These data represent the Euclidean space in which the model is embedded as a submanifold. The necessary observation operator from the space of images to the model space is not straightforward and is prone to the incorporation of systematic errors which will affect the predictive quality of the model. To avoid these errors, the formalism of surface currents is applied to provide a systematic description of surfaces, representing a natural description of the computational model. The formalism of surface currents furthermore provides a convenient formulation of surfaces as random variables and thus allows for a seamless integration into a Bayesian formulation. However, this comes at an increased computational cost, which adds to the complexity of the calibration problem induced by the cost of the involved model evaluations and the high stochastic dimension of the parameter spaces. To this end, a dimensionality reduction approach is introduced that accounts for a priori information given in terms of functions with bounded variation. This approach allows for the solution of the calibration problem via the application of advanced sampling techniques such as Sequential Monte Carlo.
13 October 2017 14:00 New Seminar room at MPA: Manfred Opper (TU Berlin)
Abstract: Variational methods provide tractable approximations to probabilistic and Bayesian inference for problems where exact inference is not tractable or Monte Carlo sampling approaches would be too time consuming. The method is highly popular in the field of machine learning and is based on replacing the exact posterior distribution by an approximation which belongs to a tractable family of distributions. The approximation is optimised by minimising the Kullback-Leibler divergence between the distributions. In this talk I will discuss applications of this method to inference problems for stochastic processes, where the latent variables are very high-dimensional or infinite-dimensional. I will illustrate this approach on three problems: 1) the estimation of hidden paths of stochastic differential equations (SDEs) from discrete-time observations, 2) the nonparametric estimation of the drift function of SDEs and 3) the analysis of neural spike data using a dynamical Ising model.
talk as pdf
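For reference, the variational objective mentioned above can be written in one line (a standard form, not taken from the talk): choosing q within a tractable family to minimise the KL divergence to the posterior is equivalent to maximising the evidence lower bound (ELBO).

```latex
\mathrm{KL}\!\left(q(\theta)\,\|\,p(\theta\mid D)\right)
  = \log p(D)
  - \underbrace{\mathbb{E}_{q}\!\left[\log p(D,\theta) - \log q(\theta)\right]}_{\text{ELBO}}
  \;\ge\; 0.
```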
22 September 2017: 14:00 New Seminar room at MPA: Licia Verde (Univ. Barcelona)
Abstract: The Bayesian approach has been the standard one in cosmology for the analysis of all major recent datasets. I will present a Bayesian approach to two open issues. One is the tension between different data sets, in particular between the value of the Hubble constant inferred from CMB measurements and the value directly measured in the local Universe; such tensions can be used as a tool to look for systematic errors and/or new physics. The other issue is the neutrino mass ordering. Cosmology, in conjunction with neutrino oscillation results, has already indicated that the mass ordering is hierarchical. A Bayesian approach enables us to determine the odds of the normal vs. inverted hierarchy. I will discuss caveats and implications of this.
4 August 2017: Jan Hasenauer (Institute of Computational Biology, Munich)
Abstract: A rigorous assessment of parameter uncertainties is important across scientific disciplines. In this talk, I will introduce profile methods [1] for differential equation models. I will review established optimization-based methods for profile calculation and touch upon integration-based approaches [2] and novel hybrid schemes. The properties of profile likelihoods will be discussed, as well as their relation to Bayesian methods for uncertainty analysis. Finally, I will show a few application examples from the field of systems biology [3].
References:
[1] Murphy & van der Vaart, J. Am. Stat. Assoc., 95(450): 449-485, 2000.
[2] Chen & Jennrich, J. Comput. Graphical Statist., 11(3):714-732, 2002.
[3] Hug et al., Math. Biosci., 246(2): 293-304, 2013.
talk as pdf
21 July 2017 14:00 New Seminar room at MPA: Frederik Beaujean (LMU / Universe Cluster)
Abstract: One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. I show a Bayesian way to estimate the statistical uncertainty for radiative-transfer calculations in which both the number of photon packets and the packet luminosity vary. Our work is motivated by the TARDIS radiative-transfer supernova code developed at ESO and MPA by Wolfgang Kerzendorf et al. Speed is an issue, so I will develop various approximations to the exact expression that are computationally more expedient. Beyond TARDIS, the proposed method is applicable to a wide spectrum of Monte Carlo simulations including particle physics. In comparison to frequentist methods, it is particularly powerful in extracting information when the data are sparse but prior information is available.
14 July 2017 14:00 MPA room 006: Maximilian Totzauer (MPP Munich)
Abstract: In this talk, I will present the results of a global Bayesian analysis of currently available neutrino data. This analysis will put data from neutrino oscillation experiments, neutrinoless double beta decay, and precision cosmology on an equal footing. I will use this setup to evaluate the discovery potential of future experiments. Furthermore, Bayes factors of the two possible neutrino mass ordering schemes (normal or inverted) will be derived for different prior choices. The latter will show that the indication for the normal mass ordering is still very mild and is mainly driven by oscillation experiments, while it does not strongly depend on realistic prior assumptions or on different combinations of cosmological data sets. Future experiments will be shown to have a significant discovery potential, depending on the absolute neutrino mass scale, the mass ordering scheme and the achievable background level of the experiments.
21 April 2017 Friday: Vanessa Boehm (MPA)
Abstract: Tomographic lensing measurements offer a powerful probe for time-resolving the clustering of matter over a wide range of scales. Since full, high resolution reconstructions of the matter field from cosmic shear measurements are ill-constrained, they commonly rely on spatial averages and prior assumptions about the noise and signal statistics. I will present a new, Bayesian method to perform such a reconstruction. Our likelihood can take into account individual galaxy contributions, thereby also accounting for their individual redshift uncertainties. As a prior, we use a lognormal distribution which significantly better captures the non-linear properties of the underlying matter field than the commonly used Gaussian prior. After reviewing the algorithm itself, I will show results from testing it on mock data generated from fully non-linear density distributions. These tests reveal the superiority of the lognormal prior over existing methods in regions with high signal-to-noise.
17 Mar 2017 Friday: Ata Metin (AIP Potsdam)
Abstract: In the current cosmological understanding, the clustering and dynamics of galaxies are driven by the underlying dark matter budget. I will present a framework to jointly reconstruct the density and velocity field of the cosmological large-scale structure of dark matter. To this end, I introduce the ARGO code, a statistical reconstruction method, and will focus on the bias description I use to connect the galaxy and dark matter densities, as well as on the perturbative description used to correct for the redshift-space distortions arising in galaxy redshift surveys. Finally I will discuss the results of the application of ARGO to the SDSS BOSS galaxy catalogue.
10 Mar 2017 Friday: Reimar Leike (MPA)
Abstract: This talk addresses two related topics: 1) axiomatic information theory and 2) simulation scheme construction. 1) Bayesian reasoning allows for high-fidelity predictions as well as consistent and exact error quantification. However, in many cases approximations have to be made in order to obtain a result at all, for example when computing predictions about fields, which have degrees of freedom at every point in space. How can the error introduced through an approximation be quantified? This problem can be phrased as a communication task in which it is impossible to communicate a full Bayesian probability through a limited-bandwidth channel. One can require that an abstract ranking function quantifies how 'embarrassing' it is to communicate a different probability. Surprisingly, it follows from very Bayesian axioms that a unique ranking function exists, and that it is equivalent to the Shannon information loss of the approximation. 2) This ranking function can then be applied to information field dynamics, where one has finite data in computer memory that contains information about an evolving field. A simulation scheme is defined uniquely by requiring minimal information loss. This is demonstrated with a working example, and the prospects of this new perspective on computer simulations are discussed.
24 Feb 2017 Friday: Fabrizia Guglielmetti (ESO)
Abstract: Image interpretation is an ill-posed inverse problem, requiring inference-theory methods for a robust solution. Bayesian statistics provides the proper principles of inference to resolve the ill-posedness in astronomical images, enabling an explicit declaration of the relevant information entering the physical models. Furthermore, Bayesian methods require the application of models that are moderately to extremely computationally expensive. Often, the Maximum a Posteriori (MAP) solution is used to estimate the most probable signal configuration (and uncertainties) from the posterior pdf of the signal given the observed data. Locating the MAP solution becomes a numerically challenging problem, especially when estimating a complex objective function defined on a high-dimensional design domain. Therefore, there is a need for fast emulators for much of the required computation. We propose to use Kriging surrogates to speed up optimization schemes such as steepest descent. Kriging surrogate models are built and incorporated in a sequential optimization strategy. Results are presented with applications to astronomical images, showing that the proposed method can effectively search for the global optimum.
talk as pdf
Talks in 2016
2 December 2016 Friday: Steven Gratton (Institute of Astronomy, Cambridge)
Abstract: This talk will introduce a general scheme for computing posteriors in situations in which only a partial description of the data's sampling distribution is available. The method is primarily calculation-based and uses maximum entropy reasoning.
18 November 2016 Friday: David Yu (MPE)
Abstract: We describe a novel paradigm for gamma-ray burst (GRB) location and spectral analysis, BAyesian Location Reconstruction Of GRBs (BALROG). The Fermi Gamma-ray Burst Monitor (GBM) is a gamma-ray photon counting instrument. The observed GBM photon flux is a convolution of the true energy flux with the detector response matrices (DRMs) of all the GBM detectors. The DRMs depend on the spacecraft orientation relative to the GRB location on the sky. Precise and accurate spectral deconvolution thus requires an accurate location; precise and accurate location also requires an accurate determination of the spectral shape. We demonstrate that BALROG eliminates the systematics of conventional approaches by simultaneously fitting for the location and spectrum of GRBs. It also correctly incorporates the uncertainties in the location of a transient into the spectral parameters and produces reliable positional uncertainties both for well-localized GRBs and for those whose position the conventional GBM analysis method cannot effectively constrain.
talk as pdf
7 October 2016 Friday: Reinhard Prix (Albert-Einstein-Institut, Hannover)
Abstract: Bayesian methods have found a number of uses in the search for gravitational waves, ranging from parameter-estimation problems (the most obvious application) to detection methods. I will give a short
overview of the various Bayesian applications (that I am aware of)
in this field, and will then mostly focus on a few selected examples
that I'm the most familiar with and that highlight particular challenges
or interesting aspects of this approach.
talk as pdf
16 September 2016 Friday: Ruediger Schack (Royal Holloway College, Univ. London, UK)
Abstract: QBism is an approach to quantum theory which is grounded in the personalist conception of probability pioneered by Ramsey, de Finetti and Savage. According to QBism, a quantum state represents an agent's personal degrees of belief regarding the consequences of her actions on her external world. The quantum formalism provides consistency criteria that enable the agent to make better decisions. This talk gives an introduction to QBism and addresses a number of foundational topics from a QBist perspective, including the question of quantum nonlocality, the quantum measurement problem, the EPR (Einstein, Podolsky and Rosen) criterion of reality, and the recent no-go theorems for epistemic interpretations of quantum states.
29 July 2016 Friday: Henrik Junklewitz (Argelander Institute fuer Astronomie, Bonn)
Abstract: Imaging and data analysis are becoming more and more pressing issues in modern astronomy, with ever larger and more complex data sets available to the scientist. This is particularly true in radio astronomy, where a number of new interferometric instruments are now available or will be in the foreseeable future, offering unprecedented data quality but also posing challenges to existing data analysis tools. In this talk, I present the growing RESOLVE package, a collection of newly developed radio interferometric imaging methods firmly based on Bayesian inference and information field theory. The algorithm package can handle the total intensity image reconstruction of extended and point sources, can take multi-frequency data into account, and is in the development stage for polarization analysis as well. It is the first radio imaging method to date that can provide an estimate of the statistical image uncertainty, which is not possible with current standard methods. The presentation includes a theoretical introduction to the inference principles being used as well as a number of application examples.
17 June 2016 Friday: Maksim Greiner (MPA)
Abstract: Tomography problems can be found in astronomy and medical imaging. They are complex inversion problems which demand regularization. I will demonstrate how a Bayesian setup automatically provides such a regularization through the prior. The setup is applied to two realistic scenarios: Galactic tomography of the free electron density and medical computed tomography.
talk as pdf
29 April 2016: Kevin H. Knuth (University at Albany, SUNY)
Abstract: A theory of logical inference should be all-encompassing, applying to any subject about
which inferences are to be made. This includes problems ranging from the early applications of
games of chance, to modern applications involving astronomy, biology, chemistry, geology,
jurisprudence, physics, signal processing, sociology, and even quantum mechanics.
This talk focuses on how the theory of inference has evolved in recent history: expanding in scope,
solidifying its foundations, deepening its insights, and growing in calculational power.
talk as pdf
28 April 2016: Additional talk by Kevin H. Knuth at Excellence Cluster
talk as pdf
4 April 2016: James Berger (Duke Univ.)
Abstract: Issues of multiplicity in testing are increasingly being encountered in a wide range of disciplines,
as the growing complexity of data allows for consideration of a multitude of possible tests
(e.g., look-elsewhere effects in the Higgs discovery, gravitational-wave discovery, etc.).
Failure to properly adjust for multiplicities is one of the major causes of the apparently increasing lack of reproducibility in science.
The way that Bayesian analysis does (and sometimes does not) deal with multiplicity will be discussed.
Different types of multiplicities that are encountered in science will also be introduced,
along with discussion of the current status of multiplicity control (or lack thereof).
talk as pdf
4 March 2016: Stephan Hartmann (LMU, Mathematical Philosophy Dept.)
Abstract: Modeling how to learn an indicative conditional ('if A then B') has been a major
challenge for Bayesian epistemologists. One proposal to meet this
challenge is to construct the posterior probability distribution by
minimizing the Kullback-Leibler divergence between the posterior
probability distribution and the prior probability distribution, taking
the learned information as a constraint (expressed as a conditional
probability statement) into account. This proposal has been criticized in
the literature based on several clever examples. In this talk, I will
revisit four of these examples and show that one obtains intuitively
correct results for the posterior probability distribution if the
underlying probabilistic models reflect the causal structure of the
scenarios in question.
talk as pdf
19 February 2016: Allen Caldwell (MPP)
Abstract: There is considerable confusion in our community concerning the definition and
meaning of frequentist confidence intervals and Bayesian credible intervals. We
will review how they are constructed and compare and contrast what they mean and
how to use them. In particular for frequentist intervals, there are different
popular constructions; they will be defined and compared with the Bayesian
intervals using examples taken from typical situations faced in experimental work.
talk as pdf
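To make the contrast concrete, here is a small editorial sketch (not from the talk) comparing a frequentist exact (Garwood) confidence interval with a flat-prior Bayesian credible interval for a Poisson mean after observing n counts:

```python
# Editorial sketch: 68% intervals for a Poisson mean after observing n counts.
from scipy import stats

n = 5                         # observed counts
alpha = 0.32                  # 1 - 0.68

# Frequentist exact (Garwood) confidence interval from the chi-square relation.
lo_f = 0.5 * stats.chi2.ppf(alpha / 2, 2 * n) if n > 0 else 0.0
hi_f = 0.5 * stats.chi2.ppf(1 - alpha / 2, 2 * (n + 1))

# Bayesian central credible interval with a flat prior: posterior is Gamma(n+1).
posterior = stats.gamma(n + 1)
lo_b, hi_b = posterior.ppf(alpha / 2), posterior.ppf(1 - alpha / 2)

print(f"confidence interval: [{lo_f:.2f}, {hi_f:.2f}]")
print(f"credible interval:   [{lo_b:.2f}, {hi_b:.2f}]")
```

The two intervals are numerically similar here but answer different questions: coverage of the procedure versus degree of belief about the parameter.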
18 February 2016: Frederik Beaujean (LMU)
Abstract: There is considerable confusion among physicists regarding the two main interpretations of probability theory. I will review the basics and then discuss some common mistakes to clarify which questions can be asked and how to interpret the numbers that come out.
22 January 2016: Daniel Pumpe (MPA)
Abstract: Stochastic differential equations (SDEs) describe many physical,
biological and sociological systems well, despite the simplifications often
made in their description. Here the use of simple SDEs to characterize
and classify complex dynamical systems is proposed within a Bayesian
framework. To this end, the algorithm 'dynamic system classifier' (DSC)
is developed. The DSC first abstracts training data of a system in terms
of time-dependent coefficients of the descriptive SDE. This then permits
the DSC to identify unique features within the training data. For
definiteness we restrict ourselves to oscillation processes with a
time-varying frequency w(t) and damping factor y(t).
Although real systems might be more complex, this simple oscillating SDE with
time-varying coefficients can capture many of their characteristic
features. The w and y timelines represent the abstract system
characterization and permit the construction of efficient classifiers.
Numerical experiments show that the classifiers perform well even in the
low signal-to-noise regime.
talk as pdf
Talks in 2015
21 December 2015: Jan Leike (Australian National University)
Abstract: Reinforcement learning (RL) is a subdiscipline of machine learning that
studies algorithms that learn to act in an unknown environment through
trial and error; the goal is to maximize a numeric reward signal. We
introduce the Bayesian RL agent AIXI that is based on the universal
(Solomonoff) prior. This prior is incomputable and thus our focus is not
on practical algorithms, but rather on the fundamental problems with the
Bayesian approach to RL and potential solutions.
talk as pdf
11 December 2015: Will Handley (Kavli Institute, Cambridge)
Abstract: PolyChord is a novel Bayesian inference tool for high-dimensional
parameter estimation and model comparison. It represents the latest
advance in nested sampling technology, and is the natural successor to
MultiNest. The algorithm uses John Skilling's slice sampling, utilising
a slice-sampling Markov-Chain-Monte-Carlo approach for the generation of
new live points. It has cubic scaling with dimensionality, and is
capable of exploring highly degenerate multi-modal distributions.
Further, it is capable of exploiting a hierarchy of parameter speeds
present in many cosmological likelihoods.
In this talk I will give a brief account of nested sampling, and the
workings of PolyChord. I will then demonstrate its efficacy by
application to challenging toy likelihoods and real-world cosmology problems.
talk as pdf
PolyChord website
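For readers new to nested sampling, the toy sketch below shows the bare-bones algorithm on a two-dimensional Gaussian likelihood with a uniform prior. It uses simple rejection sampling to draw new live points, not PolyChord's slice sampling, and all numbers are invented for illustration.

```python
# Bare-bones nested sampling on a toy problem (editorial sketch; rejection
# sampling for new live points, NOT PolyChord's slice sampling).
import numpy as np

rng = np.random.default_rng(42)
ndim, nlive, niter = 2, 200, 1600
width = 10.0                                   # uniform prior on [-5, 5]^ndim

def loglike(x):
    # Standard 2-D Gaussian likelihood.
    return -0.5 * np.sum(x**2, axis=-1) - 0.5 * ndim * np.log(2 * np.pi)

live = rng.uniform(-width / 2, width / 2, size=(nlive, ndim))
live_logL = loglike(live)

logZ = -np.inf
logX_prev = 0.0                                # log of the enclosed prior mass
for i in range(1, niter + 1):
    worst = np.argmin(live_logL)
    logL_min = live_logL[worst]
    logX = -i / nlive                          # expected shrinkage per iteration
    logw = np.log(np.exp(logX_prev) - np.exp(logX))
    logZ = np.logaddexp(logZ, logL_min + logw)
    logX_prev = logX
    # Replace the worst live point by a prior draw with higher likelihood.
    while True:
        trial = rng.uniform(-width / 2, width / 2, size=ndim)
        trial_logL = loglike(trial)
        if trial_logL > logL_min:
            live[worst], live_logL[worst] = trial, trial_logL
            break

# Add the contribution of the remaining live points and compare with the
# analytic evidence log(1/width^2) = -2 log(width).
logZ = np.logaddexp(logZ, np.log(np.mean(np.exp(live_logL))) + logX_prev)
print(f"estimated log Z = {logZ:.2f}, analytic = {-2 * np.log(width):.2f}")
```

The rejection step becomes exponentially inefficient as the likelihood contour shrinks, which is exactly the bottleneck that slice-sampling schemes such as PolyChord are designed to avoid.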
27 November 2015: Hans Eggers (Univ. Stellenbosch; TUM; Excellence Cluster)
Hans Eggers and Michiel de Kock
Abstract: Computational approaches to model comparison are often necessary, but
they are expensive. Analytical Bayesian methods remain useful as
benchmarks and signposts. While Gaussian likelihoods with linear
parametrisations are usually considered a closed subject, more
detailed inspection reveals successive layers of issues. In
particular, comparison of models with parameter spaces of different
dimension inevitably retains an unwanted dependence on prior
metaparameters such as the cutoffs of uniform priors. Reducing the
problem to symmetry on the hypersphere surface and its radius, we
formulate a prior that treats different parameter space dimensions
fairly. The resulting "r-prior" yields closed forms for the evidence
for a number of choices, including three priors from the statistics
literature which are hence just special cases. The r-prior may
therefore point the way towards a better understanding of the inner
mathematical structure which is not yet fully understood. Simple
simulations show that the current interim formulation performs as well
as other model comparison approaches. However, the performance of the
different approaches varies considerably, and more numerical work is
needed to obtain a comprehensive picture.
talk as pdf
16 October 2015: Christian Robert (Universite Paris-Dauphine)
Abstract: Introduced in the late 1990's, the ABC method can be considered from several
perspectives, ranging from a purely practical motivation towards handling complex likelihoods
to non-parametric justifications. We propose here a different analysis of ABC techniques
and in particular of ABC model selection. Our exploration focuses on the idea that generic machine
learning tools like random forests (Breiman, 2001) can help in conducting model selection among
the highly complex models covered by ABC algorithms. Both theoretical and algorithmic output
indicate that posterior probabilities are poorly estimated by ABC.
I will describe how our search for an alternative first led us to abandon the use of posterior
probabilities of the models under comparison as evidence tools. As a first substitute, we proposed to
select the most likely model via a random forest procedure and to compute posterior predictive
performances of the corresponding ABC selection method. It is only recently that we realised that
random forest methods can also be adapted to the further estimation of the posterior probability
of the selected model. I will also discuss our recommendation towards sparse implementation of the
random forest tree construction, using severe subsampling and reduced reference tables. The
performance in terms of power in model choice and gain in computation time of the resulting ABC-random
forest methodology are illustrated on several population genetics datasets.
talk as pdf
29 September 2015: John Skilling (MEDC)
Abstract: Tomography measures the density of a body (usually by its opacity to X-rays)
along multiple lines of sight.
Body -> X-ray -> Opacity data
From these line integrals, we reconstruct an image of the density as a function of position, p(x).
The user then interprets this in terms of material classification (in medicine, bone or muscle or fat
or fluid or cavity, etc.).
Opacity data -> invert -> Density image -> interpret -> Material model
Bayesian analysis does not need to pass through the intermediate step of density: it can start directly
with a prior probability distribution over plausible material models. This expresses the user's judgement
about which models are thought plausible, and which are not.
Opacity data -> Bayes -> Material model
That prior is modulated by the data (through the likelihood function) to give the posterior distribution of
those images that remain plausible after the data are used.
Usually, only a tiny proportion, of order exp(-(size of dataset)), of prior possibilities survives into the posterior, so
that Bayesian analysis was essentially impossible before computers. Even with computers, direct search is
impossible in large dimension, and we need numerical methods such as nested sampling to guide the exploration.
But this can now be done, easily and generally. The exponential curse of dimensionality is removed. Starting
directly from material models enables data fusion, where different modalities (such as CT, MRI, ultrasound)
can all be brought to bear on the same material model, with results that are clearer and more informative than
any individual analysis. The aim is to use sympathetic prior models along with modern algorithms to advance the
state of the art.
talk as pdf
24 July 2015: Maria Bergemann (MPIA Heidelberg)
Abstract: Spectroscopic stellar observations have shaped our understanding of stars
and galaxies. This is because spectra of stars are the only way to
determine their chemical composition, which is the fundamental resource to
study cosmic nucleosynthesis in different environments and on different
time-scales. Research in this field has never been more exciting and
important to astronomy: the ongoing and future large-scale stellar
spectroscopic surveys are making gigantic steps along the way towards
high-precision stellar, Galactic, and extra-galactic archaeology.
However, the data we extract from stellar spectra are not
strictly-speaking 'observational'. These data - fundamental parameters and
chemical abundances - heavily rely upon physical models of stars and
statistical methods to compare the model predictions with raw
observations. I will describe our efforts to provide the most realistic
models of stellar spectra, based upon 3D non-local thermodynamic
equilibrium physics, and the new Bayesian spectroscopy approach for the
quantitative analysis of the data. I will show how our new methods and
models transform quantitative spectroscopy, and discuss implications for
stellar and Galactic evolution.
talk as pdf
17 July 2015: Igor Ovchinnikov (UCLA)
Abstract: Prominent long-range phenomena in natural dynamical systems such
as the 1/f and flicker noises, and the power-law statistics observed,
e.g., in solar flares, gamma-ray bursts, and neuroavalanches
are still not fully understood. A new framework sheds light on the
mechanism of emergence of these rich phenomena from the point of
view of stochastic dynamics of their underlying systems. This framework
exploits an exact correspondence relation of any stochastic differential
equation (SDE) with a dual (topological) supersymmetric model. This
talk introduces the basic ideas of the framework. It will show that
the emergent long-range dynamical behavior in Nature can be
understood as the result of the spontaneous breakdown of the
topological supersymmetry that all SDEs possess. Surprisingly
enough, the concept of supersymmetry, devised mainly for particle
physics, has direct applicability and predictive power, e.g., over the
electrochemical dynamics in a human brain. The mathematics of the
framework will be detailed in a separate lecture series on "Dynamical
Systems via Topological Field Theory" at MPA from July 27th to July 31st.
talk as pdf
26 June 2015: Volker Schmid (Statistics Department of LMU Munich)
Abstract: Compartment models are used as biological models for the analysis of a variety of imaging data. Perfusion imaging, for example, aims to investigate the kinetics in human tissue in vivo via a contrast agent. Using a series of images, the exchange of contrast agent between different compartments in the tissue over time is of interest. Using the analytic solution of the system of differential equations, nonlinear parametric functions are obtained and can be fitted to the data at the voxel level. However, the estimation of the parameters is unstable, in particular in models with more compartments. To this end, we use Bayesian regularization to obtain stable estimators along with credible intervals. Prior knowledge about the context of the local compartment models is incorporated; here, context can refer to spatial information, potentially including edges in the tissue. Additionally, patient- or study-specific information can be used in order to develop a comprehensive model for the analysis of a set of images at once. I will show the application of fully Bayesian multi-compartment models in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and fluorescence recovery after photobleaching (FRAP) microscopy. Additionally, I will discuss some alternative non-Bayesian approaches using the same or similar prior information.
19 June 2015: Emille Ishida (MPA)
Abstract: Approximate Bayesian Computation (ABC) enables parameter
inference for complex physical systems in cases where the true
likelihood function is unknown, unavailable, or computationally too
expensive. It relies on the forward simulation of mock data and comparison between
observed and synthetic catalogues. In this talk I will go through the basic principles
of ABC and show how it has been used in astronomy recently. As a practical example,
I will present "cosmoabc", a Python ABC sampler featuring a Population Monte Carlo
variation of the original ABC algorithm, which uses an adaptive importance sampling scheme.
talk as pdf
Paper: Ishida et al, 2015
Code
Code
Documentation
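To illustrate the basic rejection-ABC loop described above, here is a minimal editorial sketch (unrelated to cosmoabc; the toy model, summary statistic and tolerance are all invented): parameters are drawn from the prior, mock data are simulated, and a draw is kept only if its summary statistic lies within a tolerance of the observed one.

```python
# Minimal rejection-ABC sketch (editorial toy, not cosmoabc): infer the mean
# of a Gaussian while pretending the likelihood is unavailable.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.normal(2.0, 1.0, size=200)     # "data" with unknown mean
summary_obs = observed.mean()                 # summary statistic

def simulate(mu):
    # Forward simulation of mock data for a proposed parameter value.
    return rng.normal(mu, 1.0, size=observed.size)

epsilon = 0.1                                 # distance tolerance
accepted = []
while len(accepted) < 1000:
    mu = rng.uniform(-5.0, 5.0)               # draw from the prior
    if abs(simulate(mu).mean() - summary_obs) < epsilon:
        accepted.append(mu)

posterior = np.array(accepted)
# The finite tolerance slightly inflates the spread relative to the exact posterior.
print(f"ABC posterior mean = {posterior.mean():.2f} +- {posterior.std():.2f}")
```

Population Monte Carlo variants, as used in cosmoabc, replace the fixed prior proposal and tolerance with an adaptively shrinking sequence, which is far more efficient.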
8 May 2015: Allen Caldwell (MPP)
Abstract: Deciding whether a model provides a good description of data is often based on a
goodness-of-fit criterion summarized by a p-value. Although there is considerable
confusion concerning the meaning of p-values, leading to their misuse, they are
nevertheless of practical importance in common data analysis tasks. We motivate
their application using a Bayesian argument. We then describe commonly and less
commonly known discrepancy variables and how they are used to define p-values. The
distributions of these are then extracted for examples modeled on typical data
analysis tasks, and comments on their usefulness for determining goodness-of-fit are
given. (Based on a paper by Frederik Beaujean, Allen Caldwell, Daniel Kollar, and Kevin Kroeninger.)
talk as pdf
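As a concrete example of a discrepancy-variable p-value, the sketch below computes a Pearson chi-square discrepancy for a toy Poisson counting experiment and calibrates it by Monte Carlo replication of the data; this is an editorial illustration, not code from the paper.

```python
# Editorial sketch: a Monte Carlo p-value for a Poisson counting model using
# the Pearson chi-square discrepancy (toy numbers).
import numpy as np

rng = np.random.default_rng(3)
expected = np.array([4.0, 9.0, 16.0, 25.0, 12.0])   # model prediction per bin
observed = rng.poisson(expected)                     # pretend these were measured

def discrepancy(counts):
    return np.sum((counts - expected) ** 2 / expected, axis=-1)

d_obs = discrepancy(observed)
replicas = rng.poisson(expected, size=(100_000, expected.size))
p_value = np.mean(discrepancy(replicas) >= d_obs)   # tail probability under the model
print(f"observed discrepancy = {d_obs:.2f}, p-value = {p_value:.3f}")
```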
24 April 2015: Udo von Toussaint (IPP)
Abstract: The quantification of uncertainty for complex simulations is of increasing importance. However, standard approaches like Monte Carlo sampling quickly become infeasible. Therefore, Bayesian and non-Bayesian probabilistic uncertainty quantification methods like polynomial chaos (PC) expansion methods or Gaussian processes have found increasing use over recent years. This presentation focuses on the concepts and use of non-intrusive collocation methods for the propagation of uncertainty in computational models, using illustrative examples as well as real-world problems, i.e. the Vlasov-Poisson equation with uncertain inputs.
27 February 2015: Maximilian Imgrund (USM/LMU Munich and MPIfR Bonn)
Abstract: Pulsars' extremely predictable timing data are used to detect signals from fundamental physics such as the general theory of relativity. To fit for the actual parameters of interest, the rotational phase of these compact and fast rotating objects is modeled and then matched with observational data by using the times of arrival (ToAs) of a certain rotational phase tracked over years of observations. While there exist Bayesian methods to infer the parameters' pdfs from the ToAs, the standard methods to generate these ToAs rely on simply averaging the data received by the telescope. Our novel Bayesian method, acting on the single-pulse level, may give both more precise results and a more accurate error estimate.
30 January 2015: Torsten Ensslin (MPA)
Abstract: How to optimally describe an evolving field on a finite computer? How to assimilate measurements and statistical knowledge about a field into a simulation? Information field dynamics is a novel conceptual framework to address such questions from an information-theoretical perspective. It has already proven to provide superior simulation schemes in a number of applications.
Talks in 2014
19 December 2014: Max Knoetig (Institute for Particle Physics, ETH Zuerich)
Abstract: The talk discusses the on/off problem, which consists of two counting measurements, one with and one without a possible contribution from a signal process. The task is to infer the strength of the signal, e.g. of a gamma-ray source. While this sounds pretty basic, Max will present a first analytical Bayesian solution valid for any count (including zero) that supersedes frequentist results valid only for large counts. The assumptions he makes have to be reconciled with our intuition of the foundations of Bayesian probability theory.
talk as pdf
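The structure of the problem is easy to write down numerically; the following editorial sketch marginalises the background on a grid with flat priors (a brute-force toy version, not the analytical solution presented in the talk; the counts and exposure ratio are invented):

```python
# Brute-force numerical version of the on/off problem (flat priors, toy counts).
import numpy as np
from scipy.stats import poisson

n_on, n_off = 12, 20          # counts in the on- and off-source measurements
alpha = 0.5                   # ratio of on-source to off-source exposure

s = np.linspace(0.0, 30.0, 601)      # signal grid
b = np.linspace(0.01, 40.0, 801)     # background grid (expected off-source counts)
S, B = np.meshgrid(s, b, indexing="ij")

# Likelihood: N_on ~ Poisson(s + alpha*b), N_off ~ Poisson(b); flat priors.
logpost = poisson.logpmf(n_on, S + alpha * B) + poisson.logpmf(n_off, B)
post = np.exp(logpost - logpost.max())

# Marginalise over the background and normalise on the grid.
post_s = post.sum(axis=1)
post_s /= post_s.sum()
print(f"posterior mean signal strength: {np.sum(s * post_s):.2f}")
```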
24 October 2014: Frederik Beaujean (LMU)
Abstract: In measurements of rare B-meson decays, experiments often see only a few, say 100, interesting events in their detector. This may not be sufficient to determine the full angular distribution with 10 or more parameters - the object of interest to new-physics searches - in a likelihood fit. I will discuss the somewhat surprising result that the frequentist method of moments can be used to better convey the information in the data into Bayesian global fits. In the second part, I will briefly discuss how to perform such fits with a new sampling algorithm that Stephan Jahn and I recently developed. It is based on importance sampling and variational Bayes, and can be used to sample from and compute the evidence of multimodal distributions in up to 40-50 dimensions, while most of the (costly) likelihood evaluations can be done in parallel. We just released the first version of pypmc, a python package that implements the algorithm.
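To sketch how importance sampling yields an evidence estimate (an editorial toy example, not pypmc itself; the target, proposal and numbers are invented): draw from a fitted proposal q, weight each draw by the unnormalised target over q, and average the weights.

```python
# Editorial toy example of importance-sampling evidence estimation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def log_target(x):
    # Unnormalised target = likelihood * prior (flat prior on [-5, 5]).
    return stats.norm.logpdf(x, 1.0, 0.5) + stats.uniform.logpdf(x, -5.0, 10.0)

proposal = stats.norm(1.0, 1.0)           # e.g. a component fitted by variational Bayes
x = proposal.rvs(size=50_000, random_state=rng)

log_w = log_target(x) - proposal.logpdf(x)
Z = np.exp(log_w).mean()                  # evidence estimate; exact value is 0.1 here
print(f"estimated evidence Z = {Z:.4f}")
```

In pypmc the proposal is a mixture adapted to the (possibly multimodal) target, which keeps the weights well behaved in higher dimensions.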
23 May 2014: Johannes Buchner (MPE)
Abstract: Bayesian inference provides an all-purpose recipe to draw parameter and
model inferences based on knowledge of the stochastic processes at work.
But is the model used the correct one (model verification)? Have I found
all relevant effects in my data (model discovery)? Is it more reliable
to do Bayesian model comparison than likelihood ratio tests or BIC/AIC,
which are much simpler to compute?
In my recently submitted paper, we introduce Bayesian parameter
estimation and model selection for X-ray spectral analysis. As an
application, we infer the geometry of the "torus" in obscured AGN
spectra within the Chandra Deep Field South, using 10 physically
motivated models. The data is strong enough to rule out 7 of them in
individual spectra, and another two if we combine all spectra. In this
manner, we find that an obscuring toroid with a finite opening angle is
preferred over an entirely closed or disk-like obscurer. Additionally, we
find a dense reflection component in obscured AGN, possibly the
accretion disk.
My work does not stop at just applying model selection. The results are
vetted against outliers from individual spectra. We numerically compare
the frequentist properties of model selection methods (evidence,
likelihood-ratio, AIC/BIC) and look at the p-values associated with
different model priors. We showcase the QQ-plot in combination with the
AIC as a generic tool for model discovery. And finally, we show that in
our problem, multi-modal parameter spaces cause commonly used ML fitting
methods to underestimate parameter uncertainties.
talk as pdf
associated paper
4 April 2014: Rafael de Souza (Korea Astronomy and Space Science Institute)
Abstract: I will present a set of tools for mining multidimensional datasets, in particular an application to the Millennium-II Simulation dataset, where we automatically recover all groups of linear and non-linear associations within a set of ~30 parameters from thousands of galaxies. Also, I will show alternative ways to visualize general correlations in huge datasets and how dimensionality reduction techniques and graph theory can help us to understand the redshift evolution of galaxy properties. Finally, I will give a short introduction to the recently created working group on cosmostatistics within the International Astrostatistics Association.
25 Mar 2014: Paul Sutter (IAP Paris)
Abstract: I will present a new, general-purpose method for reconstructing images from radio interferometric observations using the Bayesian method of Gibbs sampling. The method automatically takes into account incomplete coverage and mode coupling. Using a set of mock images with realistic observing scenarios I will show that this approach performs better than traditional methods. In addition, Gibbs sampling scales well and provides complete statistical information. I will discuss the application of this method to upcoming 21-cm and CMB polarization experiments.
27 February 2014: Bernd Noack (PPRIME Institute, Univ. Poitiers)
Abstract: Fluid turbulence is a ubiquitous phenomenon in nature and technology.
Lungs, for instance, need turbulence for effective transport of oxygen and
CO2. Other examples include rivers, atmospheric boundary layers and the
corona of the sun. Most technologically relevant flows are turbulent:
flows in oil pipelines, combustors and mixers as well as flows around
cars, trains, ships and airplanes.
The manipulation of turbulent flows is an important means for engineering
optimization. Examples are drag reduction of cars and trucks, lift
increase of wings, increase of pressure recovery in diffusers, and
efficiency increase of wind and water energy. One rapidly evolving
technique towards this goal is closed-loop turbulence control. Here, the
flow is manipulated with actuators which may, for instance, blow or suck
air, sensors which monitor the flow state, and a control logic feeding
back sensor information into efficient actuation commands.
Closed-loop turbulence control has applications of epic proportions for the
optimization of aerodynamic forces, noise and mixing processes.
The key mathematical challenge and control opportunity is the strong
nonlinearity of the actuation response. In this talk, we present examples
of such nonlinear control effects and propose a modelling and control
strategy building on reduced-order models and maximum entropy principles.
MaxEnt principles guide (1) low-dimensional representations for the
large-scale coherent structures, (2) low-dimensional dynamical models for
their temporal evolution, (3) closure schemes for the statistical
moments, and (4) control laws exploiting the nonlinear actuation response. We
present turbulence control examples hitherto not accessible by
state-of-the-art strategies or significantly outperforming them.
talk as pdf
Talks in 2013
13 December 2013: Sebastian Dorn (MPA)
Abstract: An error-diagnostic validation method for posterior distributions (DIP test) in Bayesian signal inference is presented.
It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this
purpose. I show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their
impact on the posterior distribution. I further illustrate the DIP test with the help of analytic examples and an application to an actual problem in
precision cosmology.
talk as pdf
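The underlying idea can be sketched in a few lines (a conjugate-Gaussian toy model, not the DIP test itself): if the posterior is computed correctly, the posterior CDF evaluated at the true signal is uniformly distributed over repeated simulations.

# Minimal calibration check in the spirit of the DIP test: for a correct
# posterior, the posterior CDF evaluated at the true parameter should be
# uniform over many simulated data sets (toy conjugate-Gaussian example).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sigma_prior, sigma_noise, n_sim = 1.0, 0.5, 2000

u = []
for _ in range(n_sim):
    s_true = rng.normal(0, sigma_prior)                   # draw signal from the prior
    d = s_true + rng.normal(0, sigma_noise)               # simulate data
    # exact conjugate posterior for s given d
    post_var = 1.0 / (1 / sigma_prior**2 + 1 / sigma_noise**2)
    post_mean = post_var * d / sigma_noise**2
    u.append(stats.norm.cdf(s_true, loc=post_mean, scale=np.sqrt(post_var)))

# a correct posterior gives uniform u; an (intentionally) wrong one does not
print("KS test against uniform:", stats.kstest(u, "uniform"))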
15 November 2013: Simona Vegetti (MPA)
Abstract: Strong gravitational lens systems with extended sources are particularly
interesting because they provide additional constraints on the lens mass distribution.
In order to infer the lens properties one needs to simultaneously determine the lens
potential and the source surface brightness distribution by modeling the lensed images.
In this talk, I will present a linear inversion technique to reconstruct pixellated
sources and lens potentials. This technique makes use of Bayesian analysis to determine
the best lens mass model parameters and the best level of source regularization.
Finally, I will discuss the MultiNest technique and how this is used to compare
different lens models.
talk as pdf
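A toy version of the evidence-driven choice of the regularization level (with a random stand-in for the lensing operator; nothing here is the actual lens-modelling code):

# Toy regularized linear inversion: source pixels s are mapped to image
# pixels by an operator L, and the regularization strength is chosen by
# maximizing the Bayesian evidence (marginal likelihood).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
n_src, n_img, sigma = 30, 60, 0.1
L = rng.normal(0, 1 / np.sqrt(n_src), (n_img, n_src))     # stand-in lens operator
s_true = rng.normal(0, 1.0, n_src)
d = L @ s_true + rng.normal(0, sigma, n_img)

def log_evidence(lam):
    # marginal likelihood of the data for the prior s ~ N(0, I/lam)
    cov = L @ L.T / lam + sigma**2 * np.eye(n_img)
    return multivariate_normal.logpdf(d, mean=np.zeros(n_img), cov=cov)

lams = np.logspace(-2, 2, 41)
best = lams[np.argmax([log_evidence(l) for l in lams])]

# MAP source for the evidence-optimal regularization
A = L.T @ L / sigma**2 + best * np.eye(n_src)
s_map = np.linalg.solve(A, L.T @ d / sigma**2)
print("best regularization:", best, " rms error:", np.std(s_map - s_true))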
26 July 2013: David Hogg (MPIA Heidelberg)
Abstract: I will show a few applications -- some real and some toys -- that demonstrate
that there are astrophysics questions that can be answered using hierarchical Bayesian
inference better than they can be answered in any other way.
In particular I will talk about problems in which there are many noisily observed instances
(e.g., exoplanets or quasars) and the goal is to infer the distribution from which the (noiseless)
instances are drawn, and problems in which the goal is to make predictions for high
signal-to-noise data when only low signal-to-noise data are at hand.
I will particularly emphasize that hierarchical methods can combine information from multiple
noisy data sets even in situations in which the popular methods of "stacking" or "co-adding"
necessarily fail.
talk as pdf
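A minimal example of such a hierarchical inference (toy population and noise levels; the analytic marginalization below assumes a Gaussian population, which is an illustrative choice):

# Minimal hierarchical ("population") inference: noisy measurements y_i of
# many objects whose true values are drawn from N(mu, tau^2); we infer the
# population parameters (mu, tau), marginalizing over the unknown true
# values analytically.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
mu_true, tau_true, n = 1.0, 0.3, 500
sigma = rng.uniform(0.2, 1.0, n)                       # per-object noise levels
x_true = rng.normal(mu_true, tau_true, n)              # noiseless instances
y = x_true + rng.normal(0, sigma)                      # noisy observations

def log_like(mu, tau):
    # each y_i is marginally N(mu, tau^2 + sigma_i^2)
    return np.sum(norm.logpdf(y, loc=mu, scale=np.sqrt(tau**2 + sigma**2)))

mus = np.linspace(0.8, 1.2, 81)
taus = np.linspace(0.01, 0.8, 80)
logL = np.array([[log_like(m, t) for t in taus] for m in mus])
i, j = np.unravel_index(np.argmax(logL), logL.shape)
print("maximum-likelihood population parameters:", mus[i], taus[j])
# note: the raw scatter of the y_i would overestimate the population spread,
# because it includes the measurement noise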
7 June 2013: Alex Szalay (Johns Hopkins University)
Abstract: The talk will describe how Breiman's Random Forest technique provides a simple
and elegant approach to photometric redshifts. We show through a simple toy model how the
estimator's uncertainty can be quantified, and how the uncertainty scales with sample size,
forest size and sampling rate. We then show how the RF is really a computationally convenient
form of a kernel density estimator, and as such maps onto Bayesian techniques extremely
naturally.
talk as pdf
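A schematic version of the Random Forest photo-z estimate, including the tree-to-tree scatter as a rough per-object uncertainty (toy colours and redshifts, not survey data):

# Schematic photometric-redshift estimate with a Random Forest, with a
# simple spread-based uncertainty from the individual trees (toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 5000
z = rng.uniform(0.0, 1.5, n)                            # toy redshifts
colors = np.column_stack([z + rng.normal(0, 0.05, n),   # toy colours correlated with z
                          0.5 * z + rng.normal(0, 0.05, n),
                          rng.normal(0, 1, n)])          # plus one noise feature

train, test = slice(0, 4000), slice(4000, None)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(colors[train], z[train])

z_hat = rf.predict(colors[test])
# per-object scatter of the individual tree predictions as a rough error bar
per_tree = np.array([t.predict(colors[test]) for t in rf.estimators_])
z_err = per_tree.std(axis=0)
print("rms error:", np.std(z_hat - z[test]), " mean tree scatter:", z_err.mean())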
19 April 2013: Mikko Tuomi (University of Hertfordshire, UK)
Abstract: Bayes' rule of conditional probabilities can be used to construct a data analysis
framework in the broadest possible sense. No assumptions remain hidden in the methodology and
the results can be presented solely conditioned on the selected prior and likelihood models.
When searching for small fingerprints of low-mass planets orbiting nearby stars, the optimality
of statistical techniques is of the essence. We apply such techniques to precision radial velocities
of nearby M dwarfs and reveal the existence of a diverse population of low-mass planets right
next door.
talk as pdf
22 March 2013: Marco Selig (MPA)
Abstract: Although a large number of Bayesian methods for 1D signal reconstruction, 2D imaging,
and 3D tomography appear formally similar, one often finds individualized implementations
that are neither flexible nor transferable. The open source Python library "NIFTY" ("Numerical
Information Field Theory": http://www.mpa-garching.mpg.de/ift/nifty) allows its user to write signal reconstruction
algorithms independent of the underlying spatial grid and its resolution. In this talk, a number of 1D, 2D, 3D and
spherical examples of NIFTY's versatile application areas that involve astrophysical, medical, and abstract inference
problems are presented. Those include the use of spectral smoothness priors, the tomography of the galactic electron
density, and more.
talk as pdf
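The kind of algorithm NIFTY abstracts can be written down in a few lines of plain numpy for a 1D grid (this is not NIFTY's API; the covariance model below is an arbitrary illustrative choice):

# Plain-numpy Wiener-filter reconstruction of a correlated 1D signal from
# noisy data, as an example of the kind of algorithm that NIFTY lets one
# formulate independently of the grid and resolution (NOT NIFTY code).
import numpy as np

rng = np.random.default_rng(8)
n, sigma, ell = 200, 0.5, 10.0
x = np.arange(n)

# prior signal covariance: Gaussian correlation function with length ell
S = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / ell**2) + 1e-6 * np.eye(n)
N = sigma**2 * np.eye(n)

s = rng.multivariate_normal(np.zeros(n), S)         # draw a correlated signal
d = s + rng.normal(0, sigma, n)                     # noisy data

m = S @ np.linalg.solve(S + N, d)                   # Wiener filter: m = S (S+N)^-1 d
print("rms(d - s) =", np.std(d - s), "   rms(m - s) =", np.std(m - s))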
6 February 2013: Giulio D'Agostini (Univ. Rome 'La Sapienza')
Abstract: Assuming that the attendees of the MPE Bayesian Forum are familiar with "Bayesian computation", I will focus
on some fundamental questions. We shall play a little interactive game, supported by a Bayesian network
implemented in Hugin, from which some of the basic, debated issues concerning probability, about which there is disagreement
even among `Bayesians', will arise spontaneously.
talk as pdf
25 January 2013: Marcel Reginatto (Physikalisch-Technische Bundesanstalt, Braunschweig)
Abstract: Neutron spectrometry is a useful diagnostic tool for fusion science. Measurements of the neutron energy spectrum provide information about the
reactions that take place in the plasma, information that is relevant to understanding the consequences of choosing different experimental setups; e.g.,
when evaluating the effectiveness of different heating schemes.
In this talk, I will focus on neutron spectrometry with liquid scintillators. The analysis of such spectrometric measurements is not straightforward,
in part because the neutron energy spectrum is not measured directly but must be inferred from the data provided by the spectrometer.
It requires both the use of methods of data analysis that are well suited to the questions that are being asked and an understanding of the energy resolution
achievable when these methods are used under particular measurement conditions.
After a short overview of neutron spectrometry with liquid scintillation detectors, I will present an analysis of the resolving power and the superresolution
that is theoretically achievable with this spectrometer, based on the work of Backus and Gilbert and of Kosarev.
I will then discuss the analysis of measurements made at the Physikalisch-Technische Bundesanstalt (PTB) accelerator facility
and at the Joint European Torus (JET), in particular the application of maximum entropy deconvolution and Bayesian parameter estimation to these data.
talk as pdf
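A schematic maximum entropy unfolding of a smeared spectrum (toy response and noise; the regularization weight alpha is fixed by hand here, whereas the full MaxEnt formalism determines it from the data):

# Schematic maximum-entropy deconvolution of a smeared spectrum (toy response
# matrix and data; a stand-in for the unfolding of real spectrometer data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
n = 40
E = np.linspace(1, 10, n)
true_spec = np.exp(-0.5 * ((E - 4.0) / 0.6)**2) + 0.5 * np.exp(-0.5 * ((E - 7.0) / 0.4)**2)

# toy response matrix: each true energy is smeared into a broad measured distribution
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 1.0)**2)
R /= R.sum(axis=0)
sigma = 0.01
data = R @ true_spec + rng.normal(0, sigma, n)

def objective(logf, alpha=1.0, m=0.5):
    # minimize chi^2/2 - alpha * S(f) with the Skilling entropy S and a flat
    # default model m; alpha is a hand-picked constant in this sketch
    f = np.exp(logf)                                    # enforces positivity
    chi2 = np.sum((R @ f - data)**2) / sigma**2
    entropy = np.sum(f - m - f * np.log(f / m))
    return 0.5 * chi2 - alpha * entropy

res = minimize(objective, np.zeros(n), method="L-BFGS-B")
f_maxent = np.exp(res.x)
print("rms deviation of the unfolded spectrum from the truth:", np.std(f_maxent - true_spec))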
Talks in 2012
7 December 2012: Christoph Raeth (MPE Garching)
Abstract: The method of surrogates represents one of the key concepts of nonlinear data analysis. The technique, which was originally developed
for nonlinear time series analysis, allows one to test for weak nonlinearities in data sets in a model-independent way.
The method has found numerous applications in many fields of research ranging from geophysical and physiological time series analysis to econophysics
and astrophysics. In the talk the basic idea of this approach and the most common ways to generate surrogates are reviewed.
Further, more recent applications of the method of surrogates in the field of cosmology, namely for testing for scale-dependent non-Gaussianities
in maps of the cosmic microwave background (CMB), are outlined. Open questions about a possible combination of the surrogate approach
and Bayesian-inspired component separation techniques for further improvements of the CMB map-making procedure will be pointed out for joint discussion.
talk as pdf
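The most common surrogate construction, Fourier phase randomization, fits in a few lines (toy time series; amplitude-adjusted variants are not shown):

# Minimal surrogate generation by Fourier phase randomization: the surrogate
# keeps the power spectrum (linear correlations) of the original series but
# destroys phase correlations, so it serves as a "linear" null hypothesis.
import numpy as np

rng = np.random.default_rng(10)
n = 1024
t = np.arange(n)
x = np.sin(0.05 * t) + 0.3 * rng.normal(size=n)         # toy time series

X = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, X.size)
phases[0] = 0.0                                          # keep the mean real
phases[-1] = 0.0                                         # keep the Nyquist mode real (n even)
X_surr = np.abs(X) * np.exp(1j * phases)
surrogate = np.fft.irfft(X_surr, n)

# identical power spectra, different higher-order statistics
print(np.allclose(np.abs(np.fft.rfft(surrogate)), np.abs(X)))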
7 November 2012: Tamas Budavari (Johns Hopkins University)
Abstract: The cross-identification of objects in separate observations is one of the most fundamental problems in astronomy. Scientific analyses typically
build on combined, multicolor and/or multi-epoch datasets, and heavily rely on the quality of their associations. Cross-matching, however, is a
hard problem both statistically and computationally. We will discuss a probabilistic approach, which yields intuitive and easily calculable
formulas for point sources, but also generalizes to more complex situations. It naturally accommodates sophisticated physical and
geometric models, such as that of the spectral energy distribution of galaxies, radio jets or the proper motion of stars. Building on this new
mathematical framework, new tools are being developed to enable automated associations.
talk as pdf
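For point sources the resulting Bayes factor is simple enough to sketch directly; the flat-sky, two-catalogue version below is a simplified stand-in for the exact all-sky formulas of the talk:

# Flat-sky sketch of probabilistic cross-identification: the Bayes factor
# compares "same source" (the two positions scatter around a common true
# position with astrometric errors s1, s2) against "unrelated sources"
# (the second position is uniform over the surveyed area).
import numpy as np

def log_bayes_factor(sep_arcsec, s1_arcsec, s2_arcsec, area_deg2):
    s2_tot = s1_arcsec**2 + s2_arcsec**2                 # combined positional variance
    area_arcsec2 = area_deg2 * 3600.0**2
    # same source: separation follows a 2D Gaussian with variance s2_tot
    log_p_h1 = -np.log(2 * np.pi * s2_tot) - sep_arcsec**2 / (2 * s2_tot)
    # unrelated sources: the counterpart position is uniform over the area
    log_p_h0 = -np.log(area_arcsec2)
    return log_p_h1 - log_p_h0

# e.g. a 0.5" separation with 0.3" and 0.4" positional errors in a 100 deg^2 survey
print(log_bayes_factor(0.5, 0.3, 0.4, 100.0))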
12 October 2012: Rainer Fischer (IPP Garching)
Abstract: In fusion science, a large set of measured data has to be analysed to obtain the most reliable results.
This data set usually suffers from statistical as well as systematic uncertainties,
from lack of information, from data inconsistencies, and from interdependencies of the various physical
models describing the data from the heterogeneous measurements. A joint analysis of the combined data sets makes it possible
to improve the reliability of the results, to improve spatial and temporal resolution, to achieve synergies, and to
find and resolve data inconsistencies by exploiting the complementarity and redundancy of the various measured data.
The concept and benefits of Integrated Data Analysis in the framework of Bayesian probability theory
will be shown with examples from various diagnostics measurements at ASDEX Upgrade.
talk as pdf
11 September 2012: Jens Jasche (IAP, Paris)
Abstract: The last decade has already witnessed unprecedented progress in the collection of cosmological data.
Presently proposed and designed future cosmological probes and surveys permit us to anticipate the upcoming
avalanche of cosmological information during the next decades. This increase of valuable observations needs
to be accompanied by the development of efficient and accurate information processing technology in order
to analyse and interpret the data. In particular, cosmography projects, aiming at studying the origin and
inhomogeneous evolution of the Universe, involve high dimensional inference methods. For example, 3d cosmological
density and velocity field inference requires exploring on the order of 10^7 or more parameters. Consequently,
such projects critically rely on state-of-the-art information processing techniques and, nevertheless,
are often on the verge of numerical feasibility with present day computational resources. For this reason,
in this talk I will address the problem of high dimensional Bayesian inference from cosmological data sets,
subject to a variety of statistical and systematic uncertainties. In particular, I will focus on the discussion
of selected Markov Chain Monte Carlo techniques that permit efficiently solving inference problems with on
the order of 10^7 parameters. Furthermore, these methods will be exemplified in various cosmological applications,
ranging from 3d non-linear density and photometric redshift inference to 4d physical state inference. These techniques
permit us to exploit cosmologically relevant information from observations to unprecedented detail and hence will
significantly contribute to the era of precision cosmology.
talk as pdf
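As a generic illustration of gradient-based MCMC in many dimensions (a textbook Hamiltonian Monte Carlo step on a toy Gaussian target, not the specific samplers of the talk):

# Minimal Hamiltonian Monte Carlo sketch on a high-dimensional Gaussian
# target; gradient information is what makes such samplers viable when the
# parameter space has millions of dimensions.
import numpy as np

rng = np.random.default_rng(11)
dim = 1000
scales = np.linspace(0.5, 2.0, dim)          # target: independent Gaussians N(0, scales^2)

def neg_log_post(x):
    return 0.5 * np.sum((x / scales)**2)

def grad_neg_log_post(x):
    return x / scales**2

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.normal(size=dim)                              # draw momenta
    x_new = x.copy()
    p_new = p - 0.5 * eps * grad_neg_log_post(x_new)      # initial half step
    for _ in range(n_leap):
        x_new = x_new + eps * p_new                       # full position step
        p_new = p_new - eps * grad_neg_log_post(x_new)    # full momentum step
    p_new = p_new + 0.5 * eps * grad_neg_log_post(x_new)  # turn the last momentum step into a half step
    h_old = neg_log_post(x) + 0.5 * p @ p
    h_new = neg_log_post(x_new) + 0.5 * p_new @ p_new
    if np.log(rng.random()) < h_old - h_new:              # Metropolis accept/reject
        return x_new, True
    return x, False

x, n_acc = np.zeros(dim), 0
for _ in range(500):
    x, acc = hmc_step(x)
    n_acc += acc
print("acceptance rate:", n_acc / 500)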
6 July 2012: Fabrizia Guglielmetti (MPE Garching)
Abstract: A method to solve the long-standing problem of disentangling the background from the sources has been developed using Bayesian mixture modelling.
The technique employs a joint estimate of the background and detection of the sources in astronomical images.
Since the interpretation of observational data implies the solution of an ill-posed inverse problem, the technique makes use of Bayesian probability
theory to ensure that the solution is stable, unique and close to the exact solution of the inverse problem. The technique is general and applicable to any counting detector.
So far, this technique has been applied to X-ray data from ROSAT and Chandra and it is under a feasibility study for the forthcoming eROSITA mission.
talk as pdf
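The mixture idea can be sketched per pixel with a toy background and source rate (the real method estimates the background jointly and spatially rather than assuming it known):

# Toy background/source mixture: each pixel's counts come either from the
# background alone or from background plus a source, and we compute the
# per-pixel posterior probability of the source hypothesis.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(12)
n_pix = 10000
b = 2.0                                               # background rate per pixel (assumed known here)
s = 6.0                                               # extra rate if a source is present
has_source = rng.random(n_pix) < 0.01                 # 1% of pixels host a source
counts = rng.poisson(b + s * has_source)

prior_source = 0.01
log_p_bkg = poisson.logpmf(counts, b) + np.log(1 - prior_source)
log_p_src = poisson.logpmf(counts, b + s) + np.log(prior_source)
post_source = 1.0 / (1.0 + np.exp(log_p_bkg - log_p_src))

print("pixels with P(source) > 0.9:", np.sum(post_source > 0.9),
      " true sources:", has_source.sum())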
15 June 2012: Roberto Saglia (MPE Garching)
Abstract: Wide-field surveys of the sky like SDSS, PanSTARRS or DES provide multi-band photometry of millions of galaxies down to faint
magnitudes. Redshifts derived from such datasets, i.e. photometric redshifts, deliver galaxy distances well beyond the limits of
spectroscopic surveys. The ESA mission EUCLID will rely on photometric redshifts (combined with lensing measurements) to constrain the
properties of Dark Energy. I will review the current methods of photometric redshift determination. Empirical strategies like polynomial
fitting, kernel regression, support vector machines, and especially artificial neural networks deliver excellent results, but require
extensive spectroscopic training sets down to faint magnitudes. Template fitting in the Bayesian framework can provide a competitive
alternative, especially for the class of luminous red galaxies.
talk as pdf
25 May 2012: Niels Oppermann (MPA Garching)
Abstract: The reconstruction of spatially or temporally distributed signals from a data set can benefit from using information on the
signal's correlation structure. I will present a method, developed within Information Field Theory, that uses this correlation information, even
though it is not a priori available. The method is furthermore robust against outliers in the data since it allows for errors
in the observational error estimate. The performance of the method will be demonstrated on an astrophysical map-making problem,
the reconstruction of an all-sky image of the Galactic Faraday depth.
talk as pdf
13 April 2012: Kevin Kroeninger (Universitaet Goettingen)
Abstract: Bayesian inference is typically used to estimate the values of free parameters of a model, to test the validity of the model under study
and to compare predictions of different models with data. The Bayesian Analysis Toolkit, BAT, is a software package which addresses the
points above. It is designed to help solve statistical problems encountered in Bayesian inference. BAT is based on Bayes' Theorem and
is realized with the use of Markov Chain Monte Carlo. This gives access to the full posterior probability distribution and enables
straightforward parameter estimation, limit setting and uncertainty propagation.
BAT is implemented in C++ and allows for a flexible definition of mathematical models and applications while keeping in mind the
reliability and speed requirements of the numerical operations. It provides a set of algorithms for numerical integration, optimization
and error propagation. Predefined models exist for standard cases. In addition, methods to judge the goodness-of-fit of a model are
implemented. An interface to ROOT allows for further analysis and graphical display of results. BAT can also be run from within a RooStats
analysis.
talk as pdf
16 March 2012: Volker Dose (IPP Garching)
Abstract: We investigate in a Bayesian framework the performance of two alternative modifications
of the 200-year-old method of least squares. The first modification considers arbitrary real
positive exponents alpha instead of alpha=2 in the distance measure. This modification leads to estimates
that are less outlier sensitive than traditional least squares. Moreover, even when data are simulated
with a Gauss random number generator the optimum exponent alpha may well deviate from alpha=2.
The second modification consists of abandoning the assumption that the data uncertainties entering the
distance measure are exact. We replace this assumption by assuming that the experimentally determined
uncertainties s_i are point estimates of the unknown true uncertainties sigma_i. The remarkable
result of this modification is a likelihood which is, unlike traditional least squares, perfectly robust
against outliers in the case of inconsistent data, but approaches the least squares result for consistent data.
These properties render data selection by reason of their numerical value unnecessary.
talk as pdf
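One well-known variant of the second modification is Sivia's "conservative" likelihood, obtained by marginalizing over an unknown true uncertainty sigma >= s_i with a prior proportional to 1/sigma^2; it may differ in detail from the form discussed in the talk, but it shows the robustness against outliers:

# Compare ordinary least squares with an outlier-robust marginalized
# likelihood, p(d|theta) ~ (1 - exp(-R^2/2)) / R^2 with R = (d - f)/s_i,
# on data containing a few gross outliers (toy numbers).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(13)
n, mu_true, s = 30, 5.0, 1.0
d = rng.normal(mu_true, s, n)
d[:3] += 15.0                                         # three gross outliers

def neg_loglike_gauss(mu):
    R = (d - mu) / s
    return 0.5 * np.sum(R**2)

def neg_loglike_conservative(mu):
    R = (d - mu) / s
    R = np.where(np.abs(R) < 1e-6, 1e-6, R)           # guard against R = 0 (the limit of the ratio is 1/2)
    return -np.sum(np.log((1 - np.exp(-0.5 * R**2)) / R**2))

mu_gauss = minimize_scalar(neg_loglike_gauss, bounds=(0, 20), method="bounded").x
mu_robust = minimize_scalar(neg_loglike_conservative, bounds=(0, 20), method="bounded").x
print("least squares estimate:", mu_gauss, "  robust estimate:", mu_robust)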
10 February 2012: Allen Caldwell (Max Planck Institute for Physics, Munich)
Abstract: A Bayesian analysis of the probability of a signal in the presence of background is developed. The method is general and, in particular,
applicable to sparsely populated spectra. As a concrete example, we consider the search for neutrinoless double beta decay with the GERDA
experiment.
talk as pdf
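A schematic counting-experiment version of such an analysis (toy numbers, flat signal prior; not the full GERDA spectral fit):

# Toy signal-plus-background counting experiment: given n observed events and
# a known expected background b, compare the background-only model with a
# signal model (flat prior on the signal up to s_max) via marginal likelihoods.
import numpy as np
from scipy.stats import poisson
from scipy.integrate import quad

n_obs, b, s_max = 8, 3.2, 20.0

p_h0 = poisson.pmf(n_obs, b)                                    # background only
p_h1, _ = quad(lambda s: poisson.pmf(n_obs, b + s) / s_max, 0, s_max)
print("Bayes factor (signal vs background-only):", p_h1 / p_h0)

# posterior for the signal strength under the signal hypothesis
s_grid = np.linspace(0, s_max, 2001)
ds = s_grid[1] - s_grid[0]
post = poisson.pmf(n_obs, b + s_grid)
post /= post.sum() * ds
print("posterior mean signal:", np.sum(s_grid * post) * ds)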
25 January 2012: Udo von Toussaint (IPP Garching)
Abstract: Bayesian inference provides a consistent method for the extraction of information from physics experiments
even in ill-conditioned circumstances. The approach provides a unified rationale for data analysis, which both justifies
many of the commonly used analysis procedures and reveals some of the implicit underlying assumptions.
The presentation introduces the general ideas of the Bayesian probability theory with emphasis on the application to the evaluation of experimental data.
As case studies for Bayesian estimation techniques, examples ranging from the deconvolution of apparatus functions for improved energy resolution to
change-point estimation in time series are discussed. Key numerical techniques suited for Bayesian analysis are presented with a focus on recent developments of
Markov Chain Monte Carlo (MCMC) algorithms for high-dimensional integration problems. Additionally, Bayesian inference techniques for the design and
optimization of future experiments are introduced: experiments, instead of being merely passive recording devices, can now be designed to adapt to measured
data and to change the measurement strategy on the fly to maximize the information of an experiment. The applied key concepts and necessary numerical tools
which provide the means of designing such inference chains and the crucial aspects of data fusion are presented and areas of ongoing research are
highlighted.
talk as pdf
12 December 2011: Fabrizia Guglielmetti and Torsten Ensslin (MPE Garching)
Bayes Introduction - talk as pdf
Abstract: Astronomical image reconstruction, cosmography of the large-scale structure and investigations
of the CMB have to deal with complex inverse problems for spatially distributed quantities. To tackle such
problems in a systematic way, I present information field theory (IFT) as a means of Bayesian, data-based
inference on spatially distributed signals. IFT is a statistical field theory, which permits the
construction of optimal signal recovery algorithms. The theory can be used in many fields, and a
few applications in cosmography, CMB research, and cosmic magnetism studies are presented.
IFT - talk as pdf