Probabilistic model compiler
This research theme focuses on the problem of defining domain-specific modeling languages in which both probabilistic models and novel inference methods can be implemented easily and efficiently. In particular, a main objective is to make a clear separation between the probabilistic model and the inference algorithm, and then to automatically compile them together into an efficient solution. A key problem is to find the right set of model abstractions that can be adopted and used by engineers who are not machine learning experts, yet are still expressive enough to describe various complex application domains. We intend to focus strongly on the demonstrators and their demands for expressiveness and analyzability.
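To illustrate the separation described above, the following is a minimal sketch (all names and the dictionary-based model encoding are hypothetical, not part of any existing tool chain): the probabilistic model is plain data with no inference logic attached, so an exact conjugate solver and a generic importance sampler can be applied to the same model interchangeably.

```python
import random

# Hypothetical encoding: the model is pure data, carrying no inference code.
# Beta-Bernoulli model: theta ~ Beta(a, b); each obs[i] ~ Bernoulli(theta).
model = {"prior": ("beta", 1.0, 1.0), "obs": [1, 1, 0, 1, 1, 0, 1, 1]}

def exact_posterior_mean(m):
    # Model-specific backend: conjugate update Beta(a + heads, b + tails).
    _, a, b = m["prior"]
    heads = sum(m["obs"])
    tails = len(m["obs"]) - heads
    return (a + heads) / (a + heads + b + tails)

def importance_posterior_mean(m, n=200_000, seed=0):
    # Generic backend: likelihood-weighted sampling from the prior.
    # It uses only the declarative model data, nothing solver-specific.
    rng = random.Random(seed)
    _, a, b = m["prior"]
    num = den = 0.0
    for _ in range(n):
        theta = rng.betavariate(a, b)
        w = 1.0
        for y in m["obs"]:
            w *= theta if y == 1 else (1.0 - theta)
        num += w * theta
        den += w
    return num / den

exact = exact_posterior_mean(model)        # 7/10 = 0.7
approx = importance_posterior_mean(model)  # close to the exact value
```

A model compiler would make this dispatch automatic rather than manual, but the division of labor is the same: the model declares *what*, the backend decides *how*.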
A key aspect of this research theme is to create a language environment that enables a marketplace for both inference algorithms and model libraries. That is, machine learning experts should be able to easily release their latest algorithms within the tool chain, with the objective of making it extremely easy for engineers in industry to utilize the latest machine learning algorithms.
Besides defining expressive, analyzable, and formal probabilistic modeling abstractions, an important research challenge for this theme is to develop model compiler techniques that enable highly efficient inference. We intend to investigate and utilize several techniques that are well established within the programming language research community.
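One such established technique is partial evaluation (program specialization). As a hedged sketch, assuming a hypothetical `compile_logpdf` staging function and a fixed data set: instead of re-interpreting the model graph on every density evaluation, the compiler folds the model structure and sufficient statistics in once, leaving only arithmetic in the hot path.

```python
import math

# Hypothetical sketch of compiler specialization via staging:
# the model structure and data are fixed at "compile time",
# so the returned closure contains no interpretive overhead.
def compile_logpdf(obs):
    n = len(obs)
    s = sum(obs)  # sufficient statistic computed once, at compile time

    def logpdf(theta):
        # Specialized Bernoulli log-likelihood: pure arithmetic remains.
        return s * math.log(theta) + (n - s) * math.log(1.0 - theta)

    return logpdf

logp = compile_logpdf([1, 1, 0, 1])
```

A real model compiler would perform this staging over a full model graph and target native code, but the principle is the same trade: pay an analysis cost once, then evaluate the specialized density many times inside an inference loop.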
Specific objectives of this research theme:
- Define clear and formal probabilistic model abstractions with both high expressiveness and analyzability, directly suitable for industrial adoption.
- Develop new compiler techniques and algorithms for automatic selection and specialization of model-dependent inference algorithms.
- Develop complete open source software tool chains that enable engineers to define declarative models that are automatically compiled into efficient machine learning solutions.
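The second objective, automatic selection of inference algorithms, can be sketched as a compiler pass that inspects model structure and dispatches accordingly. The function name, model encoding, and the table of conjugate pairs below are illustrative assumptions, not an existing API:

```python
# Hypothetical compiler pass: pick an inference strategy from model
# structure. Conjugate prior/likelihood pairs admit exact inference;
# anything else falls back to a generic sampler such as MCMC.
CONJUGATE_PAIRS = {
    ("beta", "bernoulli"),
    ("gamma", "poisson"),
    ("normal", "normal"),
}

def select_inference(model):
    prior_family = model["prior"][0]
    likelihood = model["likelihood"]
    if (prior_family, likelihood) in CONJUGATE_PAIRS:
        return "exact"
    return "mcmc"
```

In a full tool chain this decision would also weigh analyzability results from the abstractions above (e.g. whether gradients exist for the whole density), but even this simple structural dispatch shows how the model, not the engineer, can drive the choice of algorithm.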