Interactive Open-Ended Object, Affordance and Grasp Learning for Robotic Manipulation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)

Abstract

Service robots are expected to work autonomously and efficiently in human-centric environments. For this type of robot, object perception and manipulation are challenging tasks due to the need for accurate and real-time responses. This paper presents an interactive open-ended learning approach to recognize multiple objects and their grasp affordances concurrently. This is an important contribution in the field of service robots since, no matter how extensive the training data used for batch learning, a robot might always be confronted with an unknown object when operating in human-centric environments. The paper describes the system architecture and the learning and recognition capabilities. Grasp learning associates grasp configurations (i.e., end-effector positions and orientations) with grasp affordance categories. The grasp affordance category and the grasp configuration are taught through verbal and kinesthetic teaching, respectively. A Bayesian approach is adopted for learning and recognition of object categories, and an instance-based approach is used for learning and recognition of affordance categories. An extensive set of experiments has been performed to assess the performance of the proposed approach in terms of recognition accuracy, scalability and grasp success rate on challenging datasets and in real-world scenarios.
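
The abstract describes two learning components: a Bayesian learner for open-ended object-category recognition and an instance-based memory that associates kinesthetically taught grasp configurations with affordance categories. The sketch below is only a minimal illustration of those two ingredients, not the authors' implementation; the bag-of-words object representation, the naive Bayes formulation, and all class and method names are assumptions made for the example.

```python
# Illustrative sketch only -- not the code from the paper. Assumes objects are
# represented as bag-of-words feature histograms.
import numpy as np


class BayesianObjectRecognizer:
    """Naive Bayes over word histograms; new categories can be added on the fly."""

    def __init__(self, n_words, smoothing=1.0):
        self.n_words = n_words
        self.smoothing = smoothing
        self.word_counts = {}   # category -> accumulated word histogram
        self.n_instances = {}   # category -> number of taught instances

    def teach(self, category, histogram):
        """Add one labelled instance (e.g. after verbal teaching by the user)."""
        if category not in self.word_counts:
            self.word_counts[category] = np.zeros(self.n_words)
            self.n_instances[category] = 0
        self.word_counts[category] += histogram
        self.n_instances[category] += 1

    def recognize(self, histogram):
        """Return the most probable known category for a new object view."""
        total = sum(self.n_instances.values())
        best, best_logp = None, -np.inf
        for cat, counts in self.word_counts.items():
            prior = np.log(self.n_instances[cat] / total)
            probs = (counts + self.smoothing) / (counts.sum() + self.smoothing * self.n_words)
            logp = prior + np.dot(histogram, np.log(probs))
            if logp > best_logp:
                best, best_logp = cat, logp
        return best


class GraspAffordanceMemory:
    """Instance-based memory: affordance category -> taught grasp configurations."""

    def __init__(self):
        self.instances = {}  # affordance category -> list of (position, quaternion)

    def teach(self, affordance, position, quaternion):
        """Store a kinesthetically demonstrated end-effector pose."""
        self.instances.setdefault(affordance, []).append(
            (np.asarray(position), np.asarray(quaternion)))

    def retrieve(self, affordance, current_position):
        """Return the stored grasp configuration closest to the current end-effector position."""
        candidates = self.instances.get(affordance, [])
        if not candidates:
            return None
        return min(candidates,
                   key=lambda g: np.linalg.norm(g[0] - np.asarray(current_position)))
```

In this hypothetical setup, a teaching interaction would call `teach` on both structures (verbal label for the object/affordance category, kinesthetic demonstration for the grasp pose), and at execution time the robot would first `recognize` the object and then `retrieve` a stored grasp configuration for the predicted affordance.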
Original language: English
Title of host publication: IEEE International Conference on Robotics and Automation (ICRA)
Publisher: IEEE
Publication status: Published - May 2019
Event: ICRA 2019 - IEEE International Conference on Robotics and Automation - Montreal Convention Center, Montreal, Canada
Duration: 20-May-2019 – 24-May-2019

Conference

Conference: ICRA 2019 - IEEE International Conference on Robotics and Automation
Country/Territory: Canada
City: Montreal
Period: 20/05/2019 – 24/05/2019
