K. Kersting: Making deep neural networks right for the right scientific reasons


Bio Information

Kristian Kersting is a Full Professor at the Computer Science Department of TU Darmstadt, Germany, heading the Artificial Intelligence and Machine Learning (AIML) lab. After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), deep (probabilistic) programming, and deep probabilistic learning. Kristian has published over 180 peer-reviewed technical papers and co-authored a book on statistical relational AI. He is a Fellow of the European Association for Artificial Intelligence (EurAI), a Fellow and Faculty member of the European Laboratory for Learning and Intelligent Systems (ELLIS), and a key supporter of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE).

Kristian received the inaugural German AI Award (Deutscher KI-Preis) 2019, accompanied by a prize of EUR 100,000, as well as several best paper awards, a Fraunhofer Attract research grant with a budget of EUR 2.5 million, and the EurAI (formerly ECCAI) AI Dissertation Award 2006 for the best Ph.D. thesis in the field of Artificial Intelligence in Europe. He is a (past) co-chair of the scientific program committees of UAI 2017, ECML PKDD 2013, and ECML PKDD 2020.

He is the founding Editor-in-Chief of Frontiers in Machine Learning and AI and a (past) action editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), the Journal of Artificial Intelligence Research (JAIR), the Artificial Intelligence Journal (AIJ), Data Mining and Knowledge Discovery (DAMI), and the Machine Learning Journal (MLJ).


Presentation Abstract

Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may exhibit “Clever Hans”-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interactions between the learning system and the human user can correct the model.

Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way, the machine's decision strategies can be improved to focus on relevant features, without a considerable drop in predictive performance.
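
The abstract leaves the penalty unspecified; a common way to instantiate the idea is the “right for the right reasons” loss of Ross et al. (2017), which adds the squared input gradients that fall inside the annotated mask to the usual cross-entropy. The PyTorch sketch below is illustrative only: the names right_reasons_loss, wrong_reason_mask, and lam are ours, and the exact loss used in the talk may differ.

import torch
import torch.nn.functional as F

def right_reasons_loss(model, x, y, wrong_reason_mask, lam=10.0):
    # Cross-entropy plus a penalty on explanations inside forbidden
    # regions. wrong_reason_mask (same shape as x) marks input regions
    # the user annotated as irrelevant, e.g. a confounding background.
    # Illustrative sketch only, not necessarily the authors' exact loss.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Input gradients of the summed log-probabilities act as a simple
    # saliency-style explanation of the model's decision.
    log_probs = F.log_softmax(logits, dim=1)
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)

    # Penalize saliency that falls inside the annotated mask, i.e.
    # decisions made for the wrong reasons.
    penalty = (wrong_reason_mask * grads).pow(2).sum()
    return ce + lam * penalty

During training, this loss would replace the plain cross-entropy for batches that carry user annotations; batches without annotations reduce to the ordinary objective by setting lam to zero.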

Authors: Kristian Kersting, Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, and Anne-Katrin Mahlein


Video

Ask the Speaker

You can post your questions for the speaker of this talk here until Nov 8. Your questions are visible to all other registered participants of the conference, together with your username, which consists of the beginning of your email address. Participants can “like” your question. Questions with more likes are more likely to be discussed during the live Q&A session on November 10, 2020.