Abstract: A significant challenge facing current optical flow and stereo methods is the difficulty of generalizing well to the real world. This is mainly due to the high cost of producing datasets and the limitations of existing self-supervised methods, which suffer from blurry results and complex training procedures. To address these challenges, we propose a unified self-supervised generalization framework for optical flow and stereo tasks: Self-Assessed Generation (SAG). Unlike previous self-supervised methods, SAG is data-driven: it uses advanced reconstruction techniques to build a reconstruction field from RGB images and generates datasets from it. We then quantify the confidence of the generated results from multiple perspectives, such as reconstruction field distribution, geometric consistency, and structural similarity, to eliminate the defects that inevitably arise during generation. We also design an automatic rendering pipeline for 3D flying foregrounds in SAG to encourage the network to learn occlusions and moving foregrounds. Experimentally, because SAG changes neither the method nor the loss function, it can directly train state-of-the-art networks in a self-supervised manner, greatly improving the generalization of self-supervised methods on current mainstream optical flow and stereo matching datasets. Compared with previous training paradigms, SAG is more general, cost-effective, and accurate.
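The self-assessment step can be pictured as a per-pixel filter that keeps a generated label only if it passes every check. Below is a minimal sketch under assumed inputs: a generated forward/backward flow pair, a rendered confidence map from the reconstruction field, and the two RGB frames. All names and thresholds are illustrative, not the SAG implementation.

```python
import numpy as np

def warp_backward(img, flow):
    """Nearest-neighbour sample of img at (x + flow_x, y + flow_y); bilinear in practice."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[y2, x2]

def confidence_mask(flow_fw, flow_bw, conf_field, img1, img2,
                    fb_thresh=1.0, conf_thresh=0.5, photo_thresh=0.1):
    """Keep only pixels whose generated label passes all three checks.

    img1, img2 are float arrays in [0, 1]; conf_field is a per-pixel
    confidence rendered from the reconstruction field (hypothetical input).
    """
    # 1) reconstruction-field confidence
    ok_field = conf_field > conf_thresh
    # 2) geometric consistency: forward flow and warped backward flow should cancel
    bw_at_fw = warp_backward(flow_bw, flow_fw)
    fb_err = np.linalg.norm(flow_fw + bw_at_fw, axis=-1)
    ok_geom = fb_err < fb_thresh
    # 3) structural/photometric similarity between frame 1 and the warped frame 2
    img2_warped = warp_backward(img2, flow_fw)
    photo_err = np.abs(img1 - img2_warped).mean(axis=-1)
    ok_photo = photo_err < photo_thresh
    return ok_field & ok_geom & ok_photo
```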
Abstract: In this paper, we study the problem of estimating the 3D motion of dense pixels from consecutive image pairs. Most previous methods build on mature optical flow baselines and depth values, projecting 2D motion on the pixel plane into 3D space and further refining the results with a depth-motion branch and other sub-modules. This stacked framework can neither exploit the complementarity between optical flow and the other modules nor escape the dependence on accurate depth information. To address these challenges, we propose a normalized scene flow framework, ScaleRAFT, based on cross-scale matching. Its core idea is to match objects between two frames directly in 3D scale space, i.e., to match features at the correct location and scale. Unlike previous methods, ScaleRAFT integrates optical flow and depth motion estimation into a unified architecture, allowing the two to mutually reinforce each other. Moreover, ScaleRAFT estimates motion in the depth direction from feature matching, removing the dependence on accurate depth information. Experimentally, our method achieves the best foreground performance to date on motion estimation tasks in driving scenarios and significantly improves various downstream 3D tasks.
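The cross-scale matching idea can be illustrated with a toy sketch: correlate frame-1 features against frame-2 features extracted at several scales (here assumed to be resampled back to a common resolution), around the location given by optical flow, and read the depth-direction motion off the best-matching scale. Everything below, including the function name, the soft-argmax over scales, and the pinhole relation Z2/Z1 ≈ 1/s, is an illustrative assumption, not the ScaleRAFT architecture.

```python
import numpy as np

def cross_scale_match(feat1, feat2_pyramid, scales, flow):
    """Score feat1 against feat2 sampled at several scales at the flowed location.

    feat1: (h, w, c) features of frame 1.
    feat2_pyramid: list of (h, w, c) frame-2 features, each extracted at a
        different scale and resampled to (h, w) -- a hypothetical input format.
    scales: list of the corresponding scale factors.
    flow: (h, w, 2) 2D optical flow from frame 1 to frame 2.
    """
    h, w, _ = feat1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    # correlation volume over the scale axis: (h, w, num_scales)
    corr = np.stack([(feat1 * f2[y2, x2]).sum(-1) for f2 in feat2_pyramid], -1)
    # soft-argmax over scales -> per-pixel apparent scale change s
    w_scale = np.exp(corr - corr.max(-1, keepdims=True))
    w_scale /= w_scale.sum(-1, keepdims=True)
    s = (w_scale * np.asarray(scales)).sum(-1)
    # under a pinhole model, apparent scale change s ~ Z1 / Z2,
    # so the normalized depth motion can be read off as Z2 / Z1 = 1 / s
    return s, 1.0 / s
```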
Abstract: A significant challenge facing current optical flow methods is the difficulty of generalizing well to the real world. This is mainly due to the high cost of hand-crafted datasets and the limitations of existing self-supervised methods, which rely on indirect losses and struggle with occlusions, resulting in blurry outputs. To address this challenge, we introduce a novel optical flow training framework: the Automatic Data Factory (ADF). ADF requires only RGB images as input to effectively train an optical flow network on the target data domain. Specifically, we use NeRF to reconstruct scenes from image collections captured by a monocular camera and then compute optical flow labels between camera pose pairs from the rendering results. To eliminate erroneous labels caused by defects in the NeRF reconstruction, we screen the generated labels from multiple aspects, such as optical flow matching accuracy, radiance field confidence, and depth consistency. The filtered labels can be used directly for network supervision. Experimentally, the generalization ability of ADF on KITTI surpasses existing self-supervised optical flow and monocular scene flow algorithms. In addition, ADF achieves impressive results in real-world zero-shot generalization evaluations and surpasses most supervised methods.
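For a static scene, the optical flow label between two camera poses follows directly from the rendered depth and the relative pose: backproject each pixel with the depth at pose 1, transform it into the frame of pose 2, reproject, and subtract. The sketch below assumes shared intrinsics K, a 4x4 relative pose T_12, and a NeRF-rendered depth map; it illustrates the geometry only and is not the ADF code.

```python
import numpy as np

def flow_from_depth_and_pose(depth, K, T_12):
    """Optical flow induced by camera motion in a static scene.

    depth: (h, w) rendered depth at pose 1.
    K: (3, 3) camera intrinsics, assumed shared by both views.
    T_12: (4, 4) rigid transform from camera-1 to camera-2 coordinates.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).astype(np.float64)  # (h, w, 3)
    # backproject to camera-1 coordinates using the rendered depth
    pts1 = (pix @ np.linalg.inv(K).T) * depth[..., None]
    # rigid transform into camera-2 coordinates
    pts1_h = np.concatenate([pts1, np.ones((h, w, 1))], -1)
    pts2 = pts1_h @ T_12.T[:, :3]
    # project with the shared intrinsics and subtract the source pixel grid
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    flow = uv2 - np.stack([xs, ys], -1)
    return flow
```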