Abstract
Robotic systems often struggle to grasp a target object because of interference from surrounding items. We propose a Deep Reinforcement Learning (DRL) method that learns joint grasping and pushing policies, enabling effective manipulation of target objects in untrained, densely cluttered environments. In particular, we introduce a dual RL model that is highly resilient in complicated scenes, achieving an average task completion rate of 98% in both simulated and real-world scenes. To evaluate the proposed method, we conduct comprehensive simulation experiments in three distinct environments: densely packed building blocks, randomly positioned building blocks, and common household objects. We further conduct real-world tests on a physical robot to confirm the robustness of our approach in various untrained and highly cluttered environments. The experimental results underscore the efficacy of our method in both simulated and real-world scenarios, where it outperforms recent state-of-the-art methods. To support reproducibility and further research, we make a demonstration video, the trained models, and the source code publicly available at https://sites.google.com/view/pushandgrasp/home.
| Original language | English |
|---|---|
| Title of host publication | 2024 IEEE International Conference on Robotics and Automation (ICRA) |
| Publisher | IEEE |
| Pages | 13840-13847 |
| Number of pages | 8 |
| ISBN (Print) | 979-8-3503-8458-1 |
| DOIs | |
| Publication status | Published - 17-May-2024 |
| Event | 2024 IEEE International Conference on Robotics and Automation (ICRA) - Yokohama, Japan. Duration: 13-May-2024 → 17-May-2024 |
Conference

| Conference | 2024 IEEE International Conference on Robotics and Automation (ICRA) |
|---|---|
| Period | 13/05/2024 → 17/05/2024 |
Keywords
- Source coding
- Grasping
- Self-supervised learning
- Interference
- Robustness
- Reproducibility of results
- Task analysis