Explainable AI 2019
- Wojciech Samek, Grégoire Montavon, Andrea Vedaldi, Lars Kai Hansen, Klaus-Robert Müller (eds.):
Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Lecture Notes in Computer Science 11700, Springer 2019, ISBN 978-3-030-28953-9
Part I Towards AI Transparency
- Wojciech Samek, Klaus-Robert Müller:
Towards Explainable Artificial Intelligence. 5-22
- Adrian Weller:
Transparency: Motivations and Challenges. 23-40
- Lars Kai Hansen, Laura Rieger:
Interpretability in Intelligent Systems - A New Concept? 41-49
Part II Methods for Interpreting AI Systems
- Anh Nguyen, Jason Yosinski, Jeff Clune:
Understanding Neural Networks via Feature Visualization: A Survey. 55-76
- Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee:
Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation. 77-95
- Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama:
Unsupervised Discrete Representation Learning. 97-119
- Seong Joon Oh, Bernt Schiele, Mario Fritz:
Towards Reverse-Engineering Black-Box Neural Networks. 121-144
Part III Explaining the Decisions of AI Systems
- Ruth Fong, Andrea Vedaldi:
Explanations for Attributing Deep Neural Network Predictions. 149-167
- Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus H. Gross:
Gradient-Based Attribution Methods. 169-191
- Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller:
Layer-Wise Relevance Propagation: An Overview. 193-209
- Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek:
Explaining and Interpreting LSTMs. 211-238
Part IV Evaluating Interpretability and Explanations
- Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba:
Comparing the Interpretability of Deep Networks via Network Dissection. 243-252
- Grégoire Montavon:
Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison. 253-265
- Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim:
The (Un)reliability of Saliency Methods. 267-280
Part V Applications of Explainable AI
- Markus Hofmarcher, Thomas Unterthiner, Jose A. Arjona-Medina, Günter Klambauer, Sepp Hochreiter, Bernhard Nessler:
Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation. 285-296
- Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller:
Understanding Patch-Based Learning of Video Data by Explaining Predictions. 297-309
- Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller:
Quantum-Chemical Insights from Interpretable Atomistic Neural Networks. 311-330
- Kristina Preuer, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, Thomas Unterthiner:
Interpretable Deep Learning in Drug Discovery. 331-345
- Frederik Kratzert, Mathew Herrnegger, Daniel Klotz, Sepp Hochreiter, Günter Klambauer:
NeuralHydrology - Interpreting LSTMs in Hydrology. 347-362
- Pamela K. Douglas, Ariana E. Anderson:
Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI. 363-378
- Marcel A. J. van Gerven, Katja Seeliger, Umut Güçlü, Yagmur Güçlütürk:
Current Advances in Neural Decoding. 379-394
Part VI Software for Explainable AI
- Maximilian Alber:
Software and Application Patterns for Explanation Methods. 399-433