Department of Information Technology

Learning in high-dimensional models

Inference in complex (e.g., nonlinear and non-Gaussian) and high-dimensional statistical models is a challenging problem that is ubiquitous in applications. Sequential Monte Carlo (SMC) methods, which are among the most successful methods for general sequential Bayesian inference, nevertheless struggle in high dimensions and are rarely used for dimensions d > 10. Making SMC a viable tool for automated inference in high-dimensional models therefore requires the formalization and automation of complexity-reducing techniques.
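The difficulty SMC faces in high dimensions can be illustrated with plain importance sampling, whose weights collapse as the dimension grows. The sketch below is a minimal, assumed toy setup (standard-Gaussian target, slightly wider Gaussian proposal, 1000 samples): the effective sample size (ESS) degrades sharply with dimension even though the per-dimension mismatch is tiny.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # number of importance samples (an arbitrary choice)

def ess_for_dimension(d, scale=1.2):
    """Effective sample size of importance sampling with a
    standard-Gaussian target and an N(0, scale^2 I) proposal."""
    x = rng.normal(0.0, scale, size=(N, d))
    # log w = log N(x; 0, I) - log N(x; 0, scale^2 I)
    logw = (-0.5 * np.sum(x ** 2, axis=1)
            + 0.5 * np.sum((x / scale) ** 2, axis=1)
            + d * np.log(scale))
    w = np.exp(logw - logw.max())  # stabilise before exponentiating
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)    # ESS = 1 / sum of squared weights

for d in (1, 10, 100):
    print(d, ess_for_dimension(d))
```

With a per-dimension proposal mismatch this small, the ESS is close to N for d = 1 but collapses to a handful of effective samples by d = 100, which is the weight degeneracy that motivates the complexity-reducing techniques discussed here.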

One approach to extending SMC to higher dimensions is to use sequences of intermediate target distributions of increasing complexity in tailored SMC samplers. Preliminary results for spatio-temporal models and for probabilistic graphical models demonstrate the feasibility of such an approach, even when the problem has no obvious inherent sequential structure. However, these results rely on carefully engineered sequences of target distributions, and automating the selection of target distributions is essential for the viability of these methods within a probabilistic model compiler.
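One standard way to build such a sequence of intermediate targets is likelihood tempering: interpolate from the prior to the posterior via pi_k(x) proportional to prior(x) * likelihood(x)^beta_k, with beta_k increasing from 0 to 1, alternating reweighting, resampling, and a Markov move. The sketch below assumes a toy Gaussian model, a fixed linear schedule, and a hand-tuned random-walk step size; it is exactly these hand-engineered choices that the text argues should be automated.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 20, 500                     # dimension and number of particles
mu = np.ones(d)                    # hypothetical likelihood centre

def log_prior(x):                  # broad Gaussian prior N(0, 3^2 I)
    return -0.5 * np.sum((x / 3.0) ** 2, axis=-1)

def log_lik(x):                    # Gaussian likelihood N(x; mu, I)
    return -0.5 * np.sum((x - mu) ** 2, axis=-1)

# fixed tempering schedule; selecting this automatically is the hard part
betas = np.linspace(0.0, 1.0, 50)

x = rng.normal(0.0, 3.0, size=(N, d))      # initialise from the prior
for b_prev, b in zip(betas[:-1], betas[1:]):
    # reweight by the incremental change in the tempered target
    logw = (b - b_prev) * log_lik(x)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]      # multinomial resampling
    # one random-walk Metropolis move per particle, targeting pi_b
    prop = x + 0.3 * rng.normal(size=(N, d))
    log_acc = (log_prior(prop) + b * log_lik(prop)
               - log_prior(x) - b * log_lik(x))
    accept = np.log(rng.random(N)) < log_acc
    x[accept] = prop[accept]

# for this conjugate toy model the exact posterior mean is (9/10) * mu
print(x.mean())
```

Even in this simple setting the result is sensitive to the number of tempering steps and the move kernel; too coarse a schedule reproduces exactly the weight degeneracy the intermediate targets are meant to avoid.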

The adaptation of generic numerical optimization techniques to exploit model structure for computational efficiency is another area that requires automation to be viable in a probabilistic model compiler. Common to all of the above areas is the need for appropriate levels of abstraction and for the formalization of engineering insight and know-how.
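As a small, assumed illustration of exploiting model structure: linear algebra arising from a chain-structured (Markovian) model often yields tridiagonal systems, which a structure-aware solver handles in O(n) time where a generic dense solve costs O(n^3). The example below compares the classical Thomas algorithm against a dense solve on a diagonally dominant test system; the model and all sizes are made up for illustration.

```python
import numpy as np

def solve_tridiag(lower, diag, upper, b):
    """Thomas algorithm: O(n) solve of a tridiagonal linear system.

    lower, upper have length n-1; diag and b have length n.
    Assumes the system is well conditioned (e.g. diagonally dominant).
    """
    n = len(diag)
    c = np.zeros(n - 1)
    d = np.zeros(n)
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):          # forward elimination
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        d[i] = (b[i] - lower[i - 1] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# hypothetical test system, diagonally dominant for stability
rng = np.random.default_rng(0)
n = 2000
lower = rng.uniform(-1, 1, n - 1)
upper = rng.uniform(-1, 1, n - 1)
diag = 4.0 + rng.uniform(0, 1, n)
b = rng.normal(size=n)

x_fast = solve_tridiag(lower, diag, upper, b)           # exploits structure
A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
x_dense = np.linalg.solve(A, b)                         # ignores structure
```

The two solutions agree to numerical precision; the point of automation would be to detect such structure from the model description rather than requiring the user to pick the specialised routine by hand.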

Specific objectives of this research theme:

  1. Develop new competitive algorithms and the accompanying analysis for inference and learning in high-dimensional models.
  2. Find appropriate abstractions and formalizations that allow the use of these methods in a probabilistic model compiler.
Updated  2017-05-24 16:02:40 by Thomas Schön.