Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence

Explainability of Deep Neural Networks

Type

Master’s thesis / Bachelor’s thesis / guided research

Prerequisites

  • Knowledge of deep learning with image data, natural language data, or graph data
  • Proficiency with Python and deep learning frameworks (either PyTorch or TensorFlow)

Description

Over the last decade, deep learning methods have been deployed in numerous real-world, often safety-critical, applications. However, a major and growing concern remains the explainability of neural network decisions. A neural network operates as a black box: a priori, one can observe only the input and output of a decision, not the reasoning that leads to it. The field of explainable AI (XAI) aims to develop explanation methods that "open the black box" and shed light on the reasoning behind neural network decisions.
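As a flavor of what such explanation methods look like, the following is a minimal sketch of gradient-based saliency ("vanilla gradients"), one of the simplest XAI techniques for image classifiers: it scores each input pixel by how strongly a small perturbation of it would change the predicted class logit. The tiny CNN and random input below are hypothetical stand-ins for a trained model and a real image.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained image classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Dummy input image; requires_grad lets us backpropagate to the pixels.
image = torch.rand(1, 3, 32, 32, requires_grad=True)

logits = model(image)
target = logits[0].argmax().item()  # explain the predicted class

# Gradient of the target logit w.r.t. the input pixels: large magnitudes
# mark pixels whose change would most affect the decision.
logits[0, target].backward()
saliency = image.grad.abs().max(dim=1).values  # max over color channels

print(saliency.shape)  # one saliency value per pixel: (1, 32, 32)
```

Methods studied in a thesis would typically go beyond this baseline (e.g., integrated gradients, layer-wise relevance propagation, or perturbation-based approaches), but the core idea of attributing a decision back to the input is the same.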
