Sanity Checks for Saliency Methods Explaining Object Detectors

Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models illustrates that certain explanation methods fail the model and data randomization tests. However, by extending these tests to various state-of-the-art object detectors, we show that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate saliency explanations, for both object classification and bounding-box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, all trained on COCO. In addition, the sensitivity of an explanation method to model parameters and data labels varies across classes, motivating class-wise sanity checks. We find that EfficientDet-D0 is the most interpretable model independent of the saliency method, passing the sanity checks with few problems.
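The model randomization test referenced in the abstract compares a saliency map computed on the trained model with one computed after re-initializing the model's weights; if the two maps stay similar, the explanation is insensitive to what the model learned. Below is a minimal sketch in PyTorch of a single randomization step for a plain gradient explanation. The `score_fn` hook, which should return the detector's scalar decision of interest (e.g. a class logit or a box coordinate), and all other names are illustrative assumptions, not the authors' actual code.

```python
# Minimal sketch of the model-parameter randomization sanity check
# (Adebayo et al., 2018), adapted to a scalar detection decision.
import copy

import torch
import torch.nn as nn
from scipy.stats import spearmanr


def gradient_saliency(model, image, score_fn):
    """Plain gradient saliency of a scalar decision w.r.t. the input image."""
    image = image.clone().requires_grad_(True)
    score = score_fn(model, image)  # scalar, e.g. class logit of one detection
    score.backward()
    return image.grad.abs().sum(dim=1)  # aggregate gradient over color channels


def randomization_test(model, image, score_fn):
    """Re-initialize the top layer and compare saliency maps.

    A faithful explanation should change after randomization; a high rank
    correlation between the two maps is a red flag for the method.
    """
    base = gradient_saliency(model, image, score_fn)

    randomized = copy.deepcopy(model)
    # Randomize only the last parametric layer (one cascading-randomization step).
    last = [m for m in randomized.modules()
            if isinstance(m, (nn.Conv2d, nn.Linear))][-1]
    nn.init.normal_(last.weight)

    rand = gradient_saliency(randomized, image, score_fn)
    rho, _ = spearmanr(base.flatten().detach().numpy(),
                       rand.flatten().detach().numpy())
    return rho
```

The data randomization test follows the same pattern, except the model is retrained on permuted labels instead of having its weights re-initialized.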
Original language: English
Title of host publication: Proceedings of the 1st World Conference on eXplainable Artificial Intelligence
Publisher: arXiv
Number of pages: 19
Publication status: Submitted - 4-Jun-2023
Event: 1st World Conference on eXplainable Artificial Intelligence - Lisbon, Portugal
Duration: 26-Jul-2023 – 28-Jul-2023
https://xaiworldconference.com/

Conference

Conference: 1st World Conference on eXplainable Artificial Intelligence
Country/Territory: Portugal
City: Lisbon
Period: 26/07/2023 – 28/07/2023
Internet address: https://xaiworldconference.com/

Keywords

  • explainable AI
  • saliency methods
  • object detection

Cite this

Padmanabhan, D. C., Plöger, P. G., Arriaga, O., & Valdenegro-Toro, M. (2023). Sanity Checks for Saliency Methods Explaining Object Detectors. In L. Longo (Ed.), Explainable Artificial Intelligence: First World Conference, xAI 2023, Lisbon, Portugal, July 26–28, 2023, Proceedings, Part I (pp. 438–455). Springer. (Communications in Computer and Information Science; vol. 1901 CCIS).
