Abstract: We propose DLTPose, a novel method for 6DoF object pose estimation from RGB-D images that combines the accuracy of sparse keypoint methods with the robustness of dense pixel-wise predictions. DLTPose predicts per-pixel radial distances to a set of at least four keypoints, which are then fed into our novel Direct Linear Transform (DLT) formulation to produce accurate 3D object frame surface estimates, leading to better 6DoF pose estimation. Additionally, we introduce a novel symmetry-aware keypoint ordering approach, designed to handle object symmetries that otherwise cause inconsistencies in keypoint assignments. Previous keypoint-based methods relied on fixed keypoint orderings, which fail to account for the multiple valid configurations exhibited by symmetric objects; our ordering approach exploits these configurations to enhance the model's ability to learn stable keypoint representations. Extensive experiments on the benchmark LINEMOD, Occlusion LINEMOD and YCB-Video datasets show that DLTPose outperforms existing methods, especially for symmetric and occluded objects, demonstrating superior Mean Average Recall values of 86.5% (LM), 79.7% (LM-O) and 89.5% (YCB-V). The code is available at https://anonymous.4open.science/r/DLTPose_/ .
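The core idea of recovering a 3D surface point from predicted radial distances to known keypoints can be illustrated with standard linearized multilateration: subtracting the sphere equation of one keypoint from the others turns the quadratic constraints into a linear system, solvable in closed form once four or more keypoints are available. This is a minimal stdlib-only sketch of that general principle, not the paper's exact DLT formulation; the function name and keypoint values are illustrative assumptions.

```python
import math

def multilaterate(keypoints, dists):
    """Recover a 3D point from radial distances to >= 4 known keypoints.

    Subtracting the sphere equation for keypoint 0 from each other keypoint i
    gives linear constraints:
        2 * (k_i - k_0) . x = (r_0^2 - r_i^2) + (|k_i|^2 - |k_0|^2)
    solved here via normal equations and Cramer's rule (pure stdlib).
    Note: this is generic multilateration, not DLTPose's specific DLT.
    """
    k0, r0 = keypoints[0], dists[0]
    A, b = [], []
    for ki, ri in zip(keypoints[1:], dists[1:]):
        A.append([2.0 * (ki[j] - k0[j]) for j in range(3)])
        b.append(r0**2 - ri**2
                 + sum(c * c for c in ki) - sum(c * c for c in k0))
    # Normal equations: (A^T A) x = A^T b
    AtA = [[sum(A[m][i] * A[m][j] for m in range(len(A)))
            for j in range(3)] for i in range(3)]
    Atb = [sum(A[m][i] * b[m] for m in range(len(A))) for i in range(3)]

    def det3(M):  # determinant of a 3x3 matrix
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    d = det3(AtA)
    x = []
    for col in range(3):  # Cramer's rule: replace one column with A^T b
        M = [row[:] for row in AtA]
        for i in range(3):
            M[i][col] = Atb[i]
        x.append(det3(M) / d)
    return x

# Illustrative example: four non-coplanar keypoints, one hidden surface point.
keypoints = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
p_true = (0.5, -0.2, 0.3)
dists = [math.dist(p_true, k) for k in keypoints]
p_est = multilaterate(keypoints, dists)
print([round(c, 6) for c in p_est])  # → [0.5, -0.2, 0.3]
```

With noisy per-pixel distance predictions, as in the dense setting the abstract describes, the same least-squares system simply becomes overdetermined and the normal-equations solve averages out the noise.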
Abstract: In recent years, the throughput requirements of e-commerce fulfillment warehouses have seen a steep increase. This has resulted in various automation solutions being developed for item picking and movement. In this paper, we address the problem of manipulators picking heterogeneous items placed randomly in a bin. Traditional solutions require that the items to be picked be placed in an orderly manner in the bin and that their exact dimensions be known beforehand. Such solutions do not perform well in the real world, since the items in a bin are seldom placed in an orderly manner and new products are added almost every day by e-commerce suppliers. We propose a cost-effective solution that handles both of the aforementioned challenges. Our solution comprises a dual-sensor system consisting of a regular RGB camera and a 3D ToF depth sensor. We propose a novel algorithm that fuses data from both these sensors to improve object segmentation while maintaining the accuracy of pose estimation, especially in occluded environments and tightly packed bins. We experimentally verify the performance of our system by picking boxes using an ABB IRB 1200 robot. We also show that our system maintains a high level of accuracy in pose estimation that is independent of box dimensions, texture, occlusion or orientation. We further show that our system is computationally inexpensive and maintains a consistent detection time of 1 second. We also discuss how this approach can be easily extended to objects of all shapes.