Abstract: We present GoalGrasp, a simple yet effective 6-DOF robot grasp pose detection method that requires neither grasp pose annotations nor grasp training. Our approach enables grasping of user-specified objects in partially occluded scenes. By combining 3D bounding boxes with simple human grasp priors, our method introduces a novel paradigm for robot grasp pose detection. First, we employ RCV, a 3D object detector that requires no 3D annotations, to achieve rapid 3D detection in new scenes. Leveraging the resulting 3D bounding box and human grasp priors, our method achieves dense grasp pose detection. The experimental evaluation involves 18 common objects categorized into 7 classes by shape. Without grasp training, our method generates dense grasp poses for 1000 scenes. We compare our method's grasp poses with those of existing approaches using a novel stability metric, demonstrating significantly higher grasp pose stability. In user-specified robot grasping experiments, our approach achieves a 94% grasp success rate. In user-specified grasping under partial occlusion, the success rate reaches 92%.
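As a rough illustration of the bounding-box-plus-prior idea (the abstract does not specify GoalGrasp's exact procedure), the following hypothetical Python sketch densely samples top-down grasp poses along an oriented 3D box, encoding a simple human-like prior: approach the object from above and close the gripper across its narrower horizontal side. The function name, pose convention, and sampling scheme are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def grasps_from_box(center, R, extents, step=0.02):
    """Hypothetical dense grasp sampling from an oriented 3D box.

    center: (3,) box center; R: (3, 3) rotation whose columns are the box
    axes in the world frame; extents: (3,) full side lengths. Returns a
    list of 4x4 gripper poses (x = closing direction, z = approach).
    """
    # Box axis most aligned with world up; the other two are "horizontal".
    up_axis = int(np.argmax(np.abs(R.T @ np.array([0.0, 0.0, 1.0]))))
    horiz = [i for i in range(3) if i != up_axis]
    # Human-like prior: close across the narrower horizontal side and
    # slide the grasp densely along the wider one.
    close_axis, slide_axis = sorted(horiz, key=lambda i: extents[i])
    approach = -R[:, up_axis] * np.sign(R[2, up_axis])   # point downward
    closing = R[:, close_axis]
    binormal = np.cross(approach, closing)               # right-handed frame
    grasps = []
    half = extents[slide_axis] / 2.0
    for offset in np.arange(-half, half + 1e-9, step):
        T = np.eye(4)
        T[:3, 0], T[:3, 1], T[:3, 2] = closing, binormal, approach
        T[:3, 3] = center + R[:, slide_axis] * offset
        grasps.append(T)
    return grasps
```

In a GoalGrasp-like pipeline, such candidates would be generated per detected box and then filtered for occlusion and reachability; that downstream filtering is not shown here.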
Abstract: We propose SIR, an efficient method for decomposing differentiable shadows in inverse rendering of indoor scenes from multi-view data, addressing the challenge of accurately decomposing materials and lighting conditions. Unlike previous methods, which struggle with shadow fidelity in complex lighting environments, our approach explicitly learns shadows for enhanced realism in material estimation under unknown light positions. Using posed HDR images as input, SIR employs an SDF-based neural radiance field for comprehensive scene representation. SIR then integrates a shadow term into a three-stage material estimation approach to improve SVBRDF quality. Specifically, SIR learns a differentiable shadow, complemented by BRDF regularization, to optimize inverse rendering accuracy. Extensive experiments on both synthetic and real-world indoor scenes demonstrate the superior performance of SIR over existing methods in both quantitative metrics and qualitative analysis. SIR's strong decomposition ability enables sophisticated editing capabilities such as free-view relighting, object insertion, and material replacement. The code and data are available at https://xiaokangwei.github.io/SIR/.
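The abstract does not give SIR's actual shading model, but the role of a learned differentiable shadow term can be sketched as a scalar visibility factor modulating direct shading. The PyTorch snippet below is a minimal, hypothetical sketch: a simplified Lambertian-plus-Blinn-Phong BRDF stands in for the SVBRDF, and shadow_logit stands in for the output of a learned shadow network.

```python
import torch
import torch.nn.functional as F

def shaded_render(albedo, roughness, normal, view_dir, light_dir,
                  light_intensity, shadow_logit):
    """Hypothetical shadow-modulated direct shading (not SIR's actual model).

    albedo, normal, view_dir, light_dir, light_intensity: (N, 3) tensors;
    roughness, shadow_logit: (N, 1). Directions are assumed unit-length.
    shadow_logit would come from a learned shadow network in a SIR-like
    pipeline; the sigmoid keeps visibility differentiable and in (0, 1).
    """
    shadow = torch.sigmoid(shadow_logit)
    cos_term = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    # Simplified stand-in BRDF: Lambertian diffuse + Blinn-Phong specular.
    half_vec = F.normalize(view_dir + light_dir, dim=-1)
    spec_exp = (2.0 / roughness.clamp(min=1e-3) ** 2 - 2.0).clamp(min=1.0)
    specular = (normal * half_vec).sum(-1, keepdim=True).clamp(min=0.0) ** spec_exp
    brdf = albedo / torch.pi + specular
    # The shadow term gates only the direct-light contribution, so shadow
    # gradients flow into both the visibility and the material parameters.
    return shadow * brdf * light_intensity * cos_term
```

Because the shadow is a differentiable multiplier rather than a binary visibility test, shading residuals can be attributed to either shadowing or material, which is the kind of disentanglement the shadow term is meant to enable.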
Abstract: Heavy reliance on 3D annotations limits the real-world application of 3D object detection. In this paper, we propose a method that demands no 3D annotations yet predicts fully oriented 3D bounding boxes. Our method, called Recursive Cross-View (RCV), transforms 3D detection into several 2D detection tasks that consume only 2D labels, based on the three-view principle. We propose a recursive paradigm in which cross-view instance segmentation and 3D bounding box generation are performed recursively until convergence. Specifically, a frustum is proposed by a 2D detector, and the recursive paradigm then outputs a fully oriented 3D box, class, and score. To show that our method can be quickly applied to new tasks in real-world scenarios, we conduct three experiments: indoor 3D human detection, fully oriented 3D hand detection, and real-time detection on a real 3D sensor. RCV achieves decent performance in all three. Once trained, our method can also serve as a 3D annotation tool; accordingly, we construct two 3D-labeled datasets, namely '3D_HUMAN' and 'D_HAND', with RCV, which can be used to pre-train other 3D detectors. Furthermore, evaluated on the SUN RGB-D benchmark, our method achieves performance comparable to some fully supervised 3D methods. RCV is the first 3D detection method that consumes no 3D labels yet yields fully oriented 3D boxes on point clouds.
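A minimal sketch of a recursive cross-view loop is given below, under the assumption that per-view 2D instance segmentation is available as a callable (in the paper this is trained from 2D labels only); the PCA box fit at the end is a stand-in for the paper's box-generation step, and all names are hypothetical.

```python
import numpy as np

def recursive_cross_view(points, segment_2d, max_iters=10, tol=1e-3):
    """Hypothetical Recursive Cross-View style refinement (illustrative only).

    points:     (N, 3) frustum point cloud proposed by a 2D detector.
    segment_2d: callable taking (M, 2) projected points and returning a
                boolean instance mask; stands in for the per-view 2D
                segmentation the paper trains with 2D labels.
    """
    idx = np.arange(len(points))
    for _ in range(max_iters):
        prev = len(idx)
        # Three-view principle: segment the surviving points in each of
        # the three orthogonal projections and keep the intersection.
        for axes in [(0, 1), (1, 2), (0, 2)]:
            idx = idx[segment_2d(points[idx][:, axes])]
        if prev - len(idx) <= tol * len(points):  # selection stopped shrinking
            break
    inliers = points[idx]
    # PCA box fit as a stand-in for the paper's box-generation step:
    # principal axes give the orientation, min/max extents give the size.
    center = inliers.mean(axis=0)
    _, _, Vt = np.linalg.svd(inliers - center, full_matrices=False)
    local = (inliers - center) @ Vt.T
    extents = local.max(axis=0) - local.min(axis=0)
    return center, Vt.T, extents, idx
```

In practice segment_2d would wrap a trained 2D network applied to renderings of the current point subset; because each pass conditions on the previous pass's inliers, the selection tightens until the three views agree, which is the convergence the recursion relies on.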