PASCAL - Pattern Analysis, Statistical Modelling and Computational Learning

How to Explain Individual Classification Decisions
David Baehrens, Timon Schröter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen and Klaus-Robert Müller
Journal of Machine Learning Research, Volume 11, pp. 1803-1831, 2010.

Abstract

After building a classifier with modern machine learning tools, we typically have a black box at hand that predicts well on unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer as to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which (based on a set of assumptions) makes it possible to explain the decisions of any classification method.
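The procedure proposed in the paper is based on local explanation vectors: the gradient of the predicted class probability with respect to the input features at the instance of interest, which indicates how much each feature locally influences the decision. Below is a minimal, model-agnostic sketch of this idea using finite differences; the toy logistic classifier, its fixed weights, and all function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def predict_proba(x, w=np.array([2.0, -1.0, 0.0]), b=0.5):
    # Toy probabilistic classifier (logistic model with fixed,
    # made-up weights) standing in for an arbitrary black box.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def explanation_vector(f, x, eps=1e-5):
    # Local explanation: central finite-difference gradient of the
    # predicted class probability with respect to each input feature.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

x0 = np.array([0.2, -0.4, 1.0])
ev = explanation_vector(predict_proba, x0)
```

Because the sketch only queries `f` at nearby points, it treats the classifier as a black box; a feature with a larger-magnitude entry in `ev` has a stronger local influence on the predicted probability, and its sign shows the direction of that influence.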

EPrint Type: Article
Project Keyword: UNSPECIFIED
Subjects: Theory & Algorithms
ID Code: 7898
Deposited By: Stefan Harmeling
Deposited On: 17 March 2011