Transparent objects are ubiquitous in daily life, valued for their beauty and simplicity, and found everywhere from kitchens to stores to factories. Yet despite being so common, perceiving and grasping transparent objects in complex environments remains a difficult problem for robots, one that stems directly from the objects' properties. First, transparent objects carry little texture information; most of their apparent texture arises from light refraction and reflection, which greatly complicates detection. Second, their surfaces are smooth, so even a small deviation in grasping position can cause the grasping task to fail. How to grasp transparent objects in complex scenes has therefore become an important and challenging topic in robotics.
Recently, the Smart Sensing and Robotics (SSR) research group led by Associate Professor Wenbo DING of Tsinghua Shenzhen International Graduate School (Tsinghua SIGS), together with coauthors, proposed a framework for transparent object grasping based on visual-tactile fusion. The framework mimics how humans grasp objects under low visibility, combining visual and tactile sensing for transparent object detection and grasping. The method not only achieves a very high grasping success rate but also enables grasping in challenging scenes involving glass fragments, stacked and overlapping objects, undulating surfaces, sand, and water.
Figure 1. Diagram of visual-tactile fusion grasping
Figure 2. Diagram of the visual-tactile fusion grasping algorithm
Figure 3. Application scenarios of the visual-tactile fusion grasping algorithm
The research article entitled “Visual-Tactile Fusion for Transparent Object Grasping in Complex Backgrounds” was recently published in the journal IEEE Transactions on Robotics.
The corresponding authors are Associate Professor Wenbo DING and Professor Houde LIU from the Institute of Data and Information at Tsinghua SIGS. The co-first authors are Shoujie LI and Haixin YU from the Institute of Data and Information. Authors also include Linqi YE from Shanghai University and Chongkun XIA, Xueqian WANG, and Xiaoping ZHANG from Tsinghua SIGS. This work was supported by the National Natural Science Foundation of China; the Tsinghua Shenzhen International Graduate School-Shenzhen Pengrui Young Faculty Program of the Shenzhen Pengrui Foundation; the Guangdong Basic and Applied Basic Research Foundation; and the Tsinghua University Guoqiang Institute.
Link to full article:
https://ieeexplore.ieee.org/document/10175024
Written by Shoujie Li
Edited by Alena Shish & Yuan Yang