Fortuin team

EFFICIENT LEARNING AND PROBABILISTIC INFERENCE FOR SCIENCE

Vincent Fortuin - Group leader

COMBINING BAYESIAN INFERENCE WITH POWERFUL DEEP LEARNING MODELS FOR CHALLENGING APPLICATIONS

Vincent Fortuin’s Helmholtz AI young investigator group, which is located at the Helmholtz Center Munich, focuses on the interface between Bayesian inference and deep learning, with the goals of improving robustness, data efficiency, and uncertainty estimation in modern machine learning approaches. While deep learning often achieves impressive performance, it tends to be over-confident in its predictions and to require large training datasets. Especially in scientific applications, where training data is scarce but detailed prior knowledge is available, insights from Bayesian statistics can drastically improve these models. Important research questions include how to effectively specify priors in deep Bayesian models, how to harness unlabeled data to learn reusable representations, how to transfer knowledge between tasks using meta-learning, and how to guarantee generalization performance using PAC-Bayesian bounds.

Visit Vincent Fortuin's personal website →

Research lines

  • Bayesian deep learning: Combining Bayesian inference with deep neural networks promises great advantages but still poses many challenges regarding effective prior specification and efficient inference (see the first sketch after this list).
  • Deep generative modeling: While deep generative models have achieved impressive performance on images and text, it is still unclear how to use them most effectively on scientific data, where datasets are much smaller but more specific prior knowledge is available (second sketch below).
  • Meta-learning: Most tasks in real-world applications, including in science, are not solved in isolation but in the context of related, similar tasks. How to transfer knowledge between these tasks most effectively remains an important open question (third sketch below).
  • PAC-Bayesian theory: While overfitting is a constant threat to machine learning models, PAC-Bayesian bounds can provide probabilistic guarantees on generalization performance and thus enable more robust and trustworthy models for critical applications (fourth sketch below).
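
To make the prior-specification challenge concrete, here is a minimal sketch of a mean-field variational Bayesian linear layer in PyTorch, assuming a standard Gaussian prior over the weights. All names, dimensions, and hyperparameters are illustrative, not taken from the group's code.

```python
# Minimal sketch: one variational Bayesian layer trained with the ELBO.
# Assumption: isotropic Gaussian prior N(0, prior_std^2) on each weight.
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    def __init__(self, d_in, d_out, prior_std=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
        self.prior = torch.distributions.Normal(0.0, prior_std)

    def forward(self, x):
        # Reparameterization trick: sample weights once per forward pass.
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(sigma)
        return x @ w.t()

    def kl(self):
        # KL(q(w) || p(w)) between Gaussian posterior and Gaussian prior.
        q = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        return torch.distributions.kl_divergence(q, self.prior).sum()

layer = BayesLinear(2, 1)
x, y = torch.randn(32, 2), torch.randn(32, 1)
opt = torch.optim.Adam(layer.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    nll = 0.5 * ((layer(x) - y) ** 2).sum()  # Gaussian likelihood, unit noise
    loss = nll + layer.kl()                  # negative ELBO
    loss.backward()
    opt.step()
```

The KL term is where the prior-specification question lives: swapping the isotropic Gaussian for a structured prior changes only `self.prior` here, but finding priors that actually encode domain knowledge is the open problem.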
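For the generative-modeling line, here is a hedged sketch of a variational autoencoder's evidence lower bound (ELBO) on toy data, assuming Gaussian encoder and decoder. The architecture and data are placeholders, not a recommended setup for scientific datasets.

```python
# Minimal sketch: VAE with Gaussian encoder/decoder on random toy data.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, d_x=8, d_z=2):
        super().__init__()
        self.enc = nn.Linear(d_x, 2 * d_z)  # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_x)

    def elbo(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        # Reparameterized sample from the approximate posterior q(z | x).
        z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
        # Gaussian log-likelihood (unit noise, up to an additive constant).
        recon = -0.5 * ((self.dec(z) - x) ** 2).sum(-1)
        # Closed-form KL(q(z | x) || N(0, I)).
        kl = 0.5 * (mu ** 2 + log_var.exp() - 1 - log_var).sum(-1)
        return (recon - kl).mean()

vae = TinyVAE()
x = torch.randn(64, 8)
opt = torch.optim.Adam(vae.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = -vae.elbo(x)  # maximize the ELBO
    loss.backward()
    opt.step()
```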
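For meta-learning, a minimal MAML-style inner/outer loop on synthetic linear-regression tasks might look as follows. The task distribution, learning rates, and the reuse of the same points as support and query set are simplifications for illustration.

```python
# Minimal sketch: MAML-style meta-learning on random linear tasks.
import torch

theta = torch.zeros(2, requires_grad=True)  # meta-parameters [slope, bias]
meta_opt = torch.optim.SGD([theta], lr=1e-2)

def task_loss(params, a, b):
    x = torch.linspace(-1, 1, 16)
    y = a * x + b                       # ground truth for this task
    pred = params[0] * x + params[1]
    return ((pred - y) ** 2).mean()

for step in range(500):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                  # a small batch of sampled tasks
        a, b = torch.randn(2)
        inner = task_loss(theta, a, b)
        # One inner adaptation step, keeping the graph so the outer
        # update can differentiate through it (second-order MAML).
        (g,) = torch.autograd.grad(inner, theta, create_graph=True)
        adapted = theta - 0.1 * g
        meta_loss = meta_loss + task_loss(adapted, a, b)
    meta_loss.backward()
    meta_opt.step()
```

The design point is that `theta` is never optimized for any single task; it is optimized so that one gradient step adapts it well to a new task from the same family.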
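Finally, a PAC-Bayesian guarantee can be made concrete with McAllester's classic bound: with probability at least 1 − δ over the training sample, the true risk of a stochastic predictor drawn from a posterior Q is bounded as computed below. The numbers in the usage example are invented.

```python
# Minimal sketch: McAllester's PAC-Bayes bound on the true 0-1 risk.
import math

def mcallester_bound(emp_risk, kl_qp, n, delta=0.05):
    """emp_risk: empirical 0-1 risk in [0, 1]; kl_qp: KL(Q || P) in nats;
    n: number of training examples; delta: confidence parameter."""
    complexity = (kl_qp + math.log(2 * math.sqrt(n) / delta)) / (2 * n)
    return emp_risk + math.sqrt(complexity)

# E.g., a posterior 10 nats from the prior, 1000 samples, 5% train error:
print(mcallester_bound(emp_risk=0.05, kl_qp=10.0, n=1000))
```

Tighter variants (e.g., Seeger- or Catoni-style bounds) replace the square-root complexity term, but the structure, empirical risk plus a KL-based complexity penalty, stays the same.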

Publications and projects