Rationale Discovery and Explainable AI

Cor Steging*, Silja Renooij, Bart Verheij

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



The justification of an algorithm's outcomes is important in many domains, and in particular in the law. However, previous research has shown that machine learning systems can make the right decisions for the wrong reasons: despite high accuracies, not all of the conditions that define the domain of the training data are learned. In this study, we investigate what the system does learn, using state-of-the-art explainable AI techniques. Using SHAP and LIME, we show which features impact the decision-making process and how that impact changes under different distributions of the training data. However, our results also show that even high accuracy and good relevant-feature detection are no guarantee of a sound rationale. Hence these state-of-the-art explainable AI techniques cannot fully expose unsound rationales, which further supports the need for a separate method for rationale evaluation.

Original language: English
Title of host publication: Legal Knowledge and Information Systems - JURIX 2021
Subtitle of host publication: The 34th Annual Conference
Editors: Erich Schweighofer
Publisher: IOS Press
Number of pages: 10
ISBN (Electronic): 9781643682525
Publication status: Published - 2-Dec-2021
Event: 34th International Conference on Legal Knowledge and Information Systems, JURIX 2021 - Virtual, Online, Lithuania
Duration: 8-Dec-2021 to 10-Dec-2021

Publication series

Name: Frontiers in Artificial Intelligence and Applications
ISSN (Print): 0922-6389


Conference: 34th International Conference on Legal Knowledge and Information Systems, JURIX 2021
City: Virtual, Online


Keywords

  • Data
  • Explainable AI
  • Knowledge
  • Machine Learning


