Abstract
In AI and law, systems designed for decision support should be explainable when pursuing justice. For these systems to be fair and responsible, they should make correct decisions and make them using a sound and transparent rationale. In this paper, we introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases, similar to unit testing in professional software development. We apply this new quantitative human-in-the-loop method in a machine learning experiment aimed at extracting known knowledge structures from artificial datasets derived from a real-life legal setting. We show that our method allows us to analyze the rationale of black-box machine learning systems by assessing which rationale elements are learned or not. Furthermore, we show that the rationale can be adjusted using tailor-made training data based on the results of the rationale evaluation.
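To make the unit-testing analogy concrete, the sketch below shows one way a model-agnostic rationale evaluation could be organized: each rationale element gets its own suite of test cases, and a black-box model is scored per element on how many of those cases it decides as the underlying knowledge structure prescribes. This is a minimal, hypothetical Python illustration under assumed names (`evaluate_rationale`, `toy_model`, the `condition_*` features), not the implementation used in the paper.

```python
# Minimal, hypothetical sketch of rationale evaluation via dedicated test
# cases; names and data are illustrative, not taken from the paper.
from typing import Callable, Dict, List, Tuple

# A "black box" is anything that maps a feature vector to a decision.
BlackBox = Callable[[Dict[str, int]], int]

# A test case pairs an input with the decision the knowledge structure prescribes.
TestCase = Tuple[Dict[str, int], int]


def evaluate_rationale(model: BlackBox,
                       suites: Dict[str, List[TestCase]]) -> Dict[str, float]:
    """Return, per rationale element, the fraction of its test cases the model passes."""
    scores = {}
    for element, cases in suites.items():
        passed = sum(model(x) == expected for x, expected in cases)
        scores[element] = passed / len(cases)
    return scores


if __name__ == "__main__":
    # Toy stand-in for a trained classifier: "eligible" only if both conditions hold.
    def toy_model(x: Dict[str, int]) -> int:
        return int(x["condition_a"] == 1 and x["condition_b"] == 1)

    # Dedicated test suites, each probing one rationale element in isolation.
    suites = {
        "condition_a_required": [
            ({"condition_a": 0, "condition_b": 1}, 0),
            ({"condition_a": 1, "condition_b": 1}, 1),
        ],
        "condition_b_required": [
            ({"condition_a": 1, "condition_b": 0}, 0),
            ({"condition_a": 1, "condition_b": 1}, 1),
        ],
    }
    print(evaluate_rationale(toy_model, suites))
```

In this reading, a per-element score below 1.0 flags a rationale element the model has not learned, which is the kind of signal the abstract describes as the basis for constructing tailor-made training data.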
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law |
| Place of Publication | New York, NY, USA |
| Publisher | Association for Computing Machinery |
| Pages | 235–239 |
| Number of pages | 5 |
| ISBN (Print) | 9781450385268 |
| DOIs | |
| Publication status | Published - 21-Jun-2021 |
| Event | 18th International Conference on Artificial Intelligence and Law - Sao Paulo, Brazil; Duration: 21-Jun-2021 → 25-Jun-2021; https://icail.lawgorithm.com.br/ |
Conference
| Conference | 18th International Conference on Artificial Intelligence and Law |
| --- | --- |
| Abbreviated title | ICAIL’21 |
| Country/Territory | Brazil |
| City | Sao Paulo |
| Period | 21/06/2021 → 25/06/2021 |
| Internet address | https://icail.lawgorithm.com.br/ |
Keywords
- explainable AI
- learning knowledge from data
- machine learning
- responsible AI