Visualization and knowledge discovery from interpretable models

Sreejita Ghosh*, Peter Tiño, Kerstin Bunte

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)
121 Downloads (Pure)

Abstract

An increasing number of sectors that affect human lives are using Machine Learning (ML) tools. Hence the need to understand their working mechanisms and to evaluate their fairness in decision-making has become paramount, ushering in the era of Explainable AI (XAI). In this contribution we introduce a few intrinsically interpretable models that can also deal with missing values, extract knowledge from the dataset and about the problem, and visualise the classifier and its decision boundaries: angle-based variants of Learning Vector Quantization. The performance of the developed classifiers is comparable to results reported in the literature for UCI's heart disease dataset treated as a binary classification problem. The newly developed classifiers also helped in investigating the complexities of this dataset as a multiclass problem.
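To illustrate the general idea behind angle-based Learning Vector Quantization, the sketch below implements a minimal LVQ1-style update using an angular (cosine) dissimilarity instead of the usual Euclidean distance. This is a generic illustration, not the authors' method: the function names, the learning rate, and the simple winner-take-all update rule are assumptions for demonstration, and the paper's variants additionally handle missing values and adaptive relevances, which are omitted here.

```python
import numpy as np

def angle_dissimilarity(x, w):
    # Angular dissimilarity d(x, w) = (1 - cos(x, w)) / 2, which lies in [0, 1]:
    # 0 for parallel vectors, 1 for opposite directions.
    cos = (x @ w) / (np.linalg.norm(x) * np.linalg.norm(w))
    return (1.0 - cos) / 2.0

def lvq1_step(x, y, prototypes, proto_labels, lr=0.01):
    # Find the prototype closest to the sample in angle.
    d = np.array([angle_dissimilarity(x, w) for w in prototypes])
    k = int(np.argmin(d))
    # Attract the winning prototype if its label matches the sample's,
    # repel it otherwise (classic LVQ1 heuristic).
    sign = 1.0 if proto_labels[k] == y else -1.0
    prototypes[k] += sign * lr * (x - prototypes[k])
    return prototypes

# Toy usage: one labelled sample pulls the same-class prototype toward it.
x = np.array([1.0, 0.0])
prototypes = np.array([[0.8, 0.2], [0.1, 0.9]])
proto_labels = [0, 1]
prototypes = lvq1_step(x, 0, prototypes, proto_labels, lr=0.1)
```

Because prototypes live in the same space as the data, the trained model can be inspected directly, which is the source of the interpretability the abstract refers to.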
Original language: English
Title of host publication: International Joint Conference on Neural Networks (IJCNN)
Place of publication: Glasgow, United Kingdom
Publisher: IEEE (The Institute of Electrical and Electronics Engineers)
Pages: 1-8
Number of pages: 8
ISBN (Print): 978-1-7281-6926-2
DOIs
Publication status: Published - 1-Jul-2020
Event: 2020 International Joint Conference on Neural Networks (IJCNN) - Glasgow, United Kingdom
Duration: 19-Jul-2020 → 24-Jul-2020

Conference

Conference: 2020 International Joint Conference on Neural Networks (IJCNN)
Country/Territory: United Kingdom
City: Glasgow
Period: 19/07/2020 → 24/07/2020

