TY - GEN
T1 - Sanity Checks for Saliency Methods Explaining Object Detectors
AU - Padmanabhan, Deepan Chakravarthi
AU - Plöger, Paul G.
AU - Arriaga, Octavio
AU - Valdenegro-Toro, Matias
N1 - 18 pages, 10 figures; camera-ready version for the 1st World Conference on eXplainable Artificial Intelligence
PY - 2023/6/4
Y1 - 2023/6/4
AB - Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo et al.'s work on evaluating saliency methods for classification models shows that certain explanation methods fail the model and data randomization tests. However, on extending these tests to various state-of-the-art object detectors, we show that the ability to explain a model depends more on the model itself than on the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their SmoothGrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, all trained on COCO. In addition, the sensitivity of an explanation method to model parameters and data labels varies class-wise, motivating per-class sanity checks. We find that EfficientDet-D0 is the most interpretable model, independent of the saliency method, passing the sanity checks with few problems.
KW - explainable AI
KW - saliency methods
KW - object detection
M3 - Conference contribution
BT - Proceedings of the 1st World Conference on eXplainable Artificial Intelligence
PB - arXiv
T2 - 1st World Conference on eXplainable Artificial Intelligence
Y2 - 26 July 2023 through 28 July 2023
ER -