Talk given at the European Conference on Artificial Intelligence, ECAI 2020. The presented paper can be found at: http://ecai2020.eu/papers/958_paper.pdf
Abstract. Recent work has shown that some common machine learning classifiers can be compiled into Boolean circuits that have the same input-output behavior. We present a theory for unveiling the reasons behind the decisions made by Boolean classifiers and study some of its theoretical and practical implications. We define notions such as sufficient, necessary, and complete reasons behind decisions, in addition to classifier and decision bias. We show how these notions can be used to evaluate counterfactual statements such as "a decision will stick even if ... because ...". We present efficient algorithms for computing these notions, which are based on new advances in tractable Boolean circuits, and illustrate them using a case study.
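To make the notion of a sufficient reason concrete, the sketch below brute-forces sufficient reasons for a tiny, hypothetical Boolean classifier. A sufficient reason here is a minimal subset of the instance's feature settings that fixes the classifier's decision no matter how the remaining features are set. The `classifier` function and its three features are illustrative assumptions, not from the paper, and this enumeration is exponential; the paper's algorithms instead exploit tractable Boolean circuits.

```python
from itertools import combinations, product

def classifier(x):
    # Hypothetical loan classifier over three Boolean features:
    # approve iff (employed AND good_credit) OR owns_home.
    employed, good_credit, owns_home = x
    return (employed and good_credit) or owns_home

def sufficient_reasons(f, instance):
    """Return the minimal subsets of feature indices whose settings in
    `instance` force f's decision for every completion of the rest."""
    n = len(instance)
    decision = f(instance)
    reasons = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            # Supersets of an already-found reason are not minimal.
            if any(set(r) <= set(subset) for r in reasons):
                continue
            free = [i for i in range(n) if i not in subset]
            # The subset is sufficient if every assignment to the free
            # features yields the same decision as the instance.
            fixed_ok = True
            for vals in product([False, True], repeat=len(free)):
                x = list(instance)
                for i, v in zip(free, vals):
                    x[i] = v
                if f(tuple(x)) != decision:
                    fixed_ok = False
                    break
            if fixed_ok:
                reasons.append(subset)
    return reasons

# The applicant (employed=True, good_credit=True, owns_home=False) is
# approved; the only sufficient reason is {employed, good_credit}.
print(sufficient_reasons(classifier, (True, True, False)))
```

Under this reading, a counterfactual such as "the decision will stick even if the applicant loses their home, because they are employed with good credit" holds exactly when the cited reason is sufficient and the varied feature lies outside it.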