Abstract
Many successful modern machine learning approaches can be described as "black box" systems: they perform well, but are unable to explain the reasoning behind their decisions. The emerging sub-field of Explainable Artificial Intelligence (XAI) aims to create systems that can explain to their users why they made a particular decision. Using artificial datasets whose internal structure is known beforehand, this study shows that the reasoning of systems that perform well is not necessarily sound. Furthermore, when multiple combined conditions define a dataset, systems can perform well on the combined problem without learning each of the individual conditions. Instead, they often learn a confounding structure within the data that allows them to make the correct decisions. With regard to the goal of creating explainable systems, such unsound rationales could produce irrational explanations, which would be problematic for the XAI movement.
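To make the abstract's point concrete, here is a minimal sketch, not the study's actual datasets or models: a synthetic dataset whose label combines two conditions (XOR, as an assumed example), plus a confounding feature that tracks the label during training. A linear classifier scores well by exploiting the confound alone and falls to chance once the confound is broken. All names and parameters below are illustrative assumptions.

```python
# Illustrative sketch only; not the study's actual data or models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

cond_a = rng.integers(0, 2, n)   # condition A (binary)
cond_b = rng.integers(0, 2, n)   # condition B (binary)
label = cond_a ^ cond_b          # target combines both conditions (XOR)

# Confound: a feature that agrees with the label 95% of the time in training.
confound = label ^ (rng.random(n) < 0.05).astype(int)

X_train = np.column_stack([cond_a, cond_b, confound])
clf = LogisticRegression().fit(X_train, label)
print("train accuracy:", clf.score(X_train, label))  # ~0.95

# A linear model cannot represent XOR from cond_a and cond_b alone, so its
# performance rests entirely on the confound. Randomising the confound at
# test time drops accuracy to chance (~0.50).
confound_test = rng.integers(0, 2, n)
X_test = np.column_stack([cond_a, cond_b, confound_test])
print("test accuracy (confound broken):", clf.score(X_test, label))
```

XOR is chosen here because it is not linearly separable, which forces the linear model onto the confound; the high training score therefore says nothing about whether the individual conditions were learned, mirroring the abstract's claim.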
Original language | English |
---|---|
Publication status | Published - 2019 |
Event | BNAIC/Benelearn Conference, Brussels, Belgium. Duration: 6-Nov-2019 → 8-Nov-2019. https://bnaic19.brussels/ |
Conference
Conference | BNAIC/Benelearn Conference |
---|---|
Country/Territory | Belgium |
City | Brussels |
Period | 06/11/2019 → 08/11/2019 |
Internet address | https://bnaic19.brussels/ |