TY - GEN
T1 - Online Incremental Learning with Abstract Argumentation Frameworks
AU - Ayoobi, H.
AU - Cao, M.
AU - Verbrugge, R.
AU - Verheij, B.
N1 - Funding Information:
⋆This work is conducted at DSSC and sponsored by a Marie Skłodowska-Curie COFUND grant, agreement no. 754315. ∗Corresponding author. Email: h.ayoobi@imperial.ac.uk (H. Ayoobi); m.cao@rug.nl (M. Cao); l.c.verbrugge@rug.nl (R. Verbrugge); bart.verheij@rug.nl (B. Verheij). Web: https://www.rug.nl/staff/h.ayoobi/research (H. Ayoobi); https://www.rug.nl/staff/m.cao/ (M. Cao); https://rinekeverbrugge.nl/ (R. Verbrugge); https://www.ai.rug.nl/~verheij/ (B. Verheij). ORCID: 0000-0002-5418-6352 (H. Ayoobi); 0000-0001-5472-562X (M. Cao)
Publisher Copyright:
© 2022 Copyright for this paper by its authors.
PY - 2022
Y1 - 2022
N2 - The environment around general-purpose service robots is dynamic by nature. Accordingly, even the robot's programmer cannot predict all the possible external failures that the robot may confront. This research proposes an online incremental learning method that can be further used to autonomously handle external failures originating from a change in the environment. Existing research typically offers special-purpose solutions. Furthermore, current online incremental learning algorithms cannot generalize well from just a few observations. In contrast, our method extracts a set of hypotheses, which can then be used for finding the best recovery behavior at each failure state. The proposed argumentation-based online incremental learning approach uses an abstract and bipolar argumentation framework to extract the most relevant hypotheses and model the defeasibility relation between them. This leads to a novel online incremental learning approach that overcomes these problems and can be used in different domains, including robotic applications. We have compared our proposed approach with state-of-the-art online incremental learning approaches and an approximation-based reinforcement learning method. The experimental results show that our approach learns more quickly from fewer observations and also achieves higher final precision than the other methods.
AB - The environment around general-purpose service robots is dynamic by nature. Accordingly, even the robot's programmer cannot predict all the possible external failures that the robot may confront. This research proposes an online incremental learning method that can be further used to autonomously handle external failures originating from a change in the environment. Existing research typically offers special-purpose solutions. Furthermore, current online incremental learning algorithms cannot generalize well from just a few observations. In contrast, our method extracts a set of hypotheses, which can then be used for finding the best recovery behavior at each failure state. The proposed argumentation-based online incremental learning approach uses an abstract and bipolar argumentation framework to extract the most relevant hypotheses and model the defeasibility relation between them. This leads to a novel online incremental learning approach that overcomes these problems and can be used in different domains, including robotic applications. We have compared our proposed approach with state-of-the-art online incremental learning approaches and an approximation-based reinforcement learning method. The experimental results show that our approach learns more quickly from fewer observations and also achieves higher final precision than the other methods.
KW - Argumentation Theory
KW - Argumentation-Based Learning
KW - General Purpose Service Robots
KW - Online Incremental Learning
UR - http://www.scopus.com/inward/record.url?scp=85138379577&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85138379577
T3 - CEUR Workshop Proceedings
SP - 65
EP - 80
BT - 1st Workshop on Argumentation and Machine Learning, ArgML 2022
PB - CEUR-WS.org
T2 - 1st Workshop on Argumentation and Machine Learning, ArgML 2022
Y2 - 13 September 2022 through 13 September 2022
ER -