Abstract
Grasp synthesis is one of the key challenges in robotic object manipulation. In this paper, we present a new deep learning-based grasp synthesis approach for 3D objects. In particular, we propose an end-to-end 3D Convolutional Neural Network to predict the objects’ graspable areas. We name our approach Res-U-Net, since the network architecture is based on the U-Net structure and residual network-style blocks. It is designed to plan 6-DOF grasps for any desired object, to be efficient to compute and use, and to be robust against varying point cloud density and Gaussian noise. We have performed extensive experiments to assess the performance of the proposed approach with respect to graspable part detection, grasp success rate, and robustness to varying point cloud density and Gaussian noise. The experiments validate the promising performance of the proposed architecture in all aspects. A video showing the performance of our approach in the simulation environment can be found at http://youtu.be/5_yAJCc8owo
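The abstract describes the architecture only at a high level: a 3D U-Net-shaped network built from residual-style convolutional blocks that outputs per-voxel graspability predictions. Below is a minimal PyTorch sketch of what such a residual 3D U-Net could look like; the class names, channel counts, single encoder/decoder level, and the voxel-grid input format are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ResBlock3D(nn.Module):
    """Residual-style block: two 3D convolutions plus a skip connection."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(out_ch)
        # 1x1x1 projection so the skip path matches the output channel count
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return self.relu(h + self.skip(x))


class ResUNet3D(nn.Module):
    """Toy U-Net with residual blocks: one encoder level, a bottleneck, and
    one decoder level with a skip connection, ending in per-voxel logits
    (hypothetical stand-in for the paper's graspable-area prediction)."""

    def __init__(self, in_ch=1, base_ch=16):
        super().__init__()
        self.enc = ResBlock3D(in_ch, base_ch)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = ResBlock3D(base_ch, base_ch * 2)
        self.up = nn.ConvTranspose3d(base_ch * 2, base_ch, kernel_size=2, stride=2)
        self.dec = ResBlock3D(base_ch * 2, base_ch)   # concat of encoder skip + upsampled features
        self.head = nn.Conv3d(base_ch, 1, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))
        return self.head(d)  # per-voxel graspability logits


if __name__ == "__main__":
    # Toy forward pass on a 32^3 occupancy grid (batch of 1, single channel).
    vox = torch.rand(1, 1, 32, 32, 32)
    print(ResUNet3D()(vox).shape)  # torch.Size([1, 1, 32, 32, 32])
```

The skip connection between encoder and decoder is the U-Net part; the identity shortcut inside each block is the residual part. The actual network in the paper is deeper and operates on the full 6-DOF grasp-planning pipeline described in the abstract.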
| Original language | English |
|---|---|
| Title of host publication | 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) |
| Publisher | IEEE |
| Pages | 781-787 |
| Number of pages | 7 |
| ISBN (Electronic) | 978-1-7281-6075-7 |
| ISBN (Print) | 978-1-7281-6076-4 |
| DOIs | |
| Publication status | Published - Aug-2020 |
Keywords
- cs.RO