Interpreting Interpretations: Organizing Attribution Methods by Criteria

CVPR 2020

Sep 29, 2020

Authors: Zifan Wang, Piotr Mardziel, Anupam Datta, Matt Fredrikson

Description: Motivated by distinct, though related, criteria, a growing number of attribution methods have been developed to interpret deep learning models. While each relies on the interpretability of the concept of "importance" and our ability to visualize patterns, explanations produced by the methods often differ. In this work we expand the foundations of human-understandable concepts with which attributions can be interpreted beyond "importance" and its visualization; we incorporate the logical concepts of necessity and sufficiency, and the concept of proportionality. We define metrics to represent these concepts as quantitative aspects of an attribution. We evaluate our measures on a collection of methods explaining convolutional neural networks (CNNs) for image classification. We conclude that some attribution methods are better interpreted in terms of necessity while others are better interpreted in terms of sufficiency, and that no method is always the most appropriate in terms of both.
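
To make the necessity/sufficiency framing concrete, below is a minimal PyTorch sketch of ablation-style checks in this spirit: a necessity-flavored score (masking the highest-attribution pixels should reduce the class score) and a sufficiency-flavored score (keeping only those pixels should preserve it). The function names, the mean-value baseline, and the fixed ablation fraction are illustrative assumptions, not the paper's exact metric definitions.

import torch

def topk_mask(attribution, frac=0.2):
    # Boolean (H, W) mask selecting the top `frac` fraction of pixels by
    # attribution; `frac` is an assumed constant, not taken from the paper.
    k = max(1, int(frac * attribution.numel()))
    threshold = attribution.flatten().topk(k).values.min()
    return attribution >= threshold

@torch.no_grad()
def necessity_score(model, image, attribution, target, frac=0.2):
    # Drop in the target class score when the most-attributed pixels are
    # ablated (replaced by the image mean). A large drop suggests the
    # highlighted region was necessary for the prediction.
    mask = topk_mask(attribution, frac)      # (H, W)
    ablated = image.clone()                  # image: (1, C, H, W)
    ablated[:, :, mask] = image.mean()
    return (model(image)[0, target] - model(ablated)[0, target]).item()

@torch.no_grad()
def sufficiency_score(model, image, attribution, target, frac=0.2):
    # Change in the target class score when only the most-attributed pixels
    # are kept (everything else replaced by the image mean). A small drop
    # suggests the highlighted region was sufficient for the prediction.
    mask = topk_mask(attribution, frac)
    kept = torch.full_like(image, image.mean().item())
    kept[:, :, mask] = image[:, :, mask]
    return (model(kept)[0, target] - model(image)[0, target]).item()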
