Discovering the Rationale of Decisions: Towards a Method for Aligning Learning and Reasoning

Cor Steging, Silja Renooij, Bart Verheij

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

In AI and law, systems designed for decision support should be explainable when pursuing justice. For these systems to be fair and responsible, they should make correct decisions and make them using a sound and transparent rationale. In this paper, we introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases, similar to unit testing in professional software development. We apply this new quantitative human-in-the-loop method in a machine learning experiment aimed at extracting known knowledge structures from artificial datasets derived from a real-life legal setting. We show that our method allows us to analyze the rationale of black-box machine learning systems by assessing which rationale elements are learned and which are not. Furthermore, we show that the rationale can be adjusted using tailor-made training data based on the results of the rationale evaluation.
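To make the unit-testing analogy concrete, here is a minimal sketch of rationale evaluation with dedicated test cases. It is illustrative only, not the authors' experimental setup: the three-condition decision rule, the random data generator, and the scikit-learn MLPClassifier are all assumptions, chosen to show how fixing one rationale element while varying everything else can probe whether a black-box model has learned that element.

```python
# Illustrative sketch only: the rule, data, and model below are
# assumptions, not the setup used in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def ground_truth(x):
    # Hypothetical known knowledge structure: a claim succeeds
    # iff condition 0 holds AND (condition 1 OR condition 2).
    return x[:, 0] & (x[:, 1] | x[:, 2])

# Train a black-box model on an artificial dataset drawn from the rule.
X_train = rng.integers(0, 2, size=(5000, 3))
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, ground_truth(X_train))

def rationale_test(model, fixed, expected, name):
    """Unit-test-style probe: fix the rationale element under test,
    vary all other conditions, and require a constant outcome."""
    X = rng.integers(0, 2, size=(1000, 3))
    for column, value in fixed.items():
        X[:, column] = value
    score = (model.predict(X) == expected).mean()
    print(f"{name}: {score:.1%} of dedicated test cases decided as expected")

# Dedicated test cases, one per rationale element:
rationale_test(model, fixed={0: 0}, expected=0,
               name="condition 0 is necessary")
rationale_test(model, fixed={0: 1, 1: 1}, expected=1,
               name="conditions 0 and 1 together suffice")
rationale_test(model, fixed={0: 1, 1: 0, 2: 0}, expected=0,
               name="condition 0 alone does not suffice")
```

On this reading, the paper's second step, adjusting the rationale with tailor-made training data, would amount to augmenting X_train with extra cases from whichever probe fails and retraining.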
Original language: English
Title of host publication: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery
Pages: 235–239
Number of pages: 5
ISBN (Print): 9781450385268
DOIs
Publication status: Published - 21-Jun-2021
Event: 18th International Conference on Artificial Intelligence and Law - Sao Paulo, Brazil
Duration: 21-Jun-2021 – 25-Jun-2021
https://icail.lawgorithm.com.br/

Conference

Conference: 18th International Conference on Artificial Intelligence and Law
Abbreviated title: ICAIL '21
Country/Territory: Brazil
City: Sao Paulo
Period: 21/06/2021 – 25/06/2021
Internet address: https://icail.lawgorithm.com.br/

Keywords

  • explainable AI
  • learning knowledge from data
  • machine learning
  • responsible AI
