Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence

About

The research interests of the Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence focus on the intersection of mathematics and artificial intelligence, aiming both at a mathematical understanding of artificial intelligence and at the application of artificial intelligence to mathematical problems.

Artificial intelligence is currently changing industry, public life, and science in an unprecedented way. The current workhorse of artificial intelligence is the deep neural network, which is designed to mimic the human brain. While we witness spectacular applications on a daily basis, most of the related research is empirically driven, and a mathematical foundation is still almost completely missing. In addition, cases where these techniques fail dramatically under small perturbations are easy to find, adversarial examples in image classification being a prominent one. There is thus a great demand for a mathematical understanding of deep learning-based methods and, building on that understanding, for their improvement.
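The adversarial phenomenon mentioned above can be made concrete in a few lines. The following is a minimal sketch of the fast gradient sign method (FGSM), one standard way of constructing such perturbations; the classifier, input, and label here are placeholder assumptions chosen purely for illustration.

```python
# Minimal FGSM sketch: perturb an input in the direction of the loss gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image with pixels in [0, 1]
y = torch.tensor([3])                             # stand-in true label

loss = loss_fn(model(x), y)
loss.backward()                                   # populates x.grad

eps = 0.05  # perturbation budget, small enough to be near-invisible to humans
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
# x_adv differs from x by at most eps per pixel, yet may flip the prediction.
```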

Deep learning can be regarded as a classical learning problem. Deriving a profound understanding of the action of a deep neural network requires the study of three research directions. The area of expressivity asks about the approximation power of specific network architectures, aiming to replace the cumbersome trial-and-error search for an architecture suited to a given application by a theory-guided choice. The direction of learning analyzes the training of a network, asking whether and how fast a training algorithm converges to the correct solution. The question of generalization targets the ability of the neural network to correctly handle data it has never seen before, which is in a sense the most challenging area. A very new research direction, coined explainability, assumes that we are already given a trained neural network and aims to understand how it reaches its decisions, in the sense of “opening the black box”. In this regime, questions of robustness and fairness are also of key importance for ensuring a safe and efficient application of deep neural networks. Intriguingly, these research directions require a multitude of mathematical methods, ranging from approximation theory and functional analysis through optimization to statistical learning theory and uncertainty quantification. This leads to fascinating theoretical research questions, which we study in our chair.
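As a toy illustration of how these directions interact, the following sketch fits a shallow ReLU network to a one-dimensional target function: the width of the network relates to expressivity, the gradient-based loop to learning, and the error on unseen points to generalization. The target function, architecture, and hyperparameters are ad hoc assumptions chosen for illustration only.

```python
# Toy example: approximate a smooth 1D function with a shallow ReLU network.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3 * x)  # target function to approximate

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))  # expressivity
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):                 # gradient-based training ("learning")
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

x_test = torch.linspace(-1, 1, 1024).unsqueeze(1)  # points never seen in training
gen_error = ((net(x_test) - torch.sin(3 * x_test)) ** 2).mean()  # "generalization"
print(f"train loss {loss.item():.2e}, test error {gen_error.item():.2e}")
```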

A similarly strong impact can be observed on science itself. Deep learning-based approaches have proven very useful in certain problem settings, in particular for solving ill-posed inverse problems, predominantly in imaging science, such as computed tomography, where they sometimes already yield state-of-the-art algorithms. Lately, more and more successes in solving partial differential equations have also been reported. Typically, the best performance is observed when model-based approaches are combined with, rather than entirely replaced by, deep learning methods, a strategy we strongly advocate in our chair. It is generally believed that we are currently witnessing a substantial paradigm change in the entire field of applied mathematics, which our chair fully embraces and pursues.
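As one concrete instance of such a combination, the following is a minimal sketch of a physics-informed neural network for a one-dimensional Poisson problem, in which the known differential operator (the model-based part) enters the training loss of the network directly. The architecture, collocation scheme, and training choices are illustrative assumptions, not a definitive implementation.

```python
# Minimal PINN sketch for u''(x) = -pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0;
# the exact solution is u(x) = sin(pi x).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)       # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_res = d2u + math.pi ** 2 * torch.sin(math.pi * x)  # PDE residual (model-based part)
    bc = net(torch.tensor([[0.0], [1.0]]))                 # boundary values should vanish
    loss = (pde_res ** 2).mean() + (bc ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```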