Abstract:Associating driver attention with the driving scene across two fields of view (FOVs) is a challenging cross-domain perception problem that requires comprehensive consideration of cross-view mapping, dynamic driving-scene analysis, and driver-status tracking. Previous methods typically focus on a single view or map attention to the scene via estimated gaze, failing to exploit the implicit connection between the two views. Moreover, simple fusion modules are insufficient for modeling the complex relationships between the two views, making information integration challenging. To address these issues, we propose a novel method for end-to-end scene-associated driver attention estimation, called EraW-Net. This method enhances the most discriminative dynamic cues, refines feature representations, and facilitates semantically aligned cross-domain integration through a W-shaped architecture, termed W-Net. Specifically, a Dynamic Adaptive Filter Module (DAF-Module) is proposed to address frequently changing driving environments by extracting vital regions. It suppresses indiscriminately recorded dynamics and highlights crucial ones through joint frequency-spatial analysis, enhancing the model's ability to parse complex dynamics. Additionally, to track driver states under non-fixed facial poses, we propose a Global Context Sharing Module (GCS-Module) that constructs refined feature representations by capturing hierarchical features adapted to various scales of head and eye movements. Finally, W-Net achieves systematic cross-view information integration through its "Encoding-Independent Partial Decoding-Fusion Decoding" structure, addressing semantic misalignment in heterogeneous data integration. Experiments on large public datasets demonstrate that the proposed method robustly and accurately estimates the mapping of driver attention into the scene.
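As a rough illustration of the joint frequency-spatial analysis the DAF-Module performs, the following is a minimal sketch assuming a PyTorch setting; the module name, parameter shapes, and gating scheme are hypothetical and not the paper's actual design.

```python
# Illustrative sketch (not the authors' code): frame-to-frame dynamics are
# re-weighted in the frequency domain, then gated spatially so that crucial
# regions are highlighted and indiscriminate dynamics are suppressed.
import torch
import torch.nn as nn

class FreqSpatialFilter(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable per-frequency weights over the rFFT2 spectrum (hypothetical).
        self.freq_weight = nn.Parameter(torch.ones(channels, height, width // 2 + 1))
        # 1x1 conv producing a spatial gate over the filtered dynamics.
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                          nn.Sigmoid())

    def forward(self, feat_t: torch.Tensor, feat_prev: torch.Tensor) -> torch.Tensor:
        dyn = feat_t - feat_prev                      # raw frame-to-frame dynamics
        spec = torch.fft.rfft2(dyn, norm="ortho")     # frequency-domain view
        spec = spec * self.freq_weight                # down-weight uninformative bands
        dyn = torch.fft.irfft2(spec, s=dyn.shape[-2:], norm="ortho")
        return feat_t + dyn * self.spatial_gate(dyn)  # spatially gated enhancement

f = FreqSpatialFilter(channels=64, height=32, width=32)
out = f(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```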
Abstract:Manipulating objects without grasping them enables more complex tasks, known as non-prehensile manipulation. Most previous methods learn only one manipulation skill, such as reaching or pushing, and cannot achieve flexible object manipulation. In this work, we introduce MRLM, a Multi-stage Reinforcement Learning approach for non-prehensile Manipulation of objects. MRLM divides the task into multiple stages according to the switching of object poses and contact points. At each stage, the policy takes a point-cloud-based state-goal fusion representation as input and proposes a spatially continuous action that includes the motion of the parallel gripper pose and its opening width. To fully unlock the potential of MRLM, we propose a set of technical contributions including the state-goal fusion representation, a spatially-reachable distance metric, and automatic buffer compaction. We evaluate MRLM on an Occluded Grasping task, which aims to grasp objects in configurations that are initially occluded. Compared with the baselines, the proposed technical contributions improve the success rate by at least 40\% and up to 100\%, and avoid falling into local optima. Our method demonstrates strong generalization to unseen objects with shapes outside the training distribution. Moreover, MRLM transfers to the real world zero-shot, achieving a 95\% success rate. Code and videos can be found at https://sites.google.com/view/mrlm.
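To make the action space concrete, below is an illustrative sketch of one plausible parameterization of the gripper action and a stage-switching rule; the field names, function names, and tolerance-based rule are assumptions, not the paper's API.

```python
# Sketch only: a plausible action structure and stage-switching predicate
# for a multi-stage scheme like MRLM. All names are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class GripperAction:
    delta_pos: np.ndarray    # (3,) translation of the parallel gripper
    delta_rot: np.ndarray    # (3,) axis-angle rotation increment
    width: float             # target opening width in meters

def stage_switched(prev_contacts: set, contacts: set,
                   prev_pose: np.ndarray, pose: np.ndarray,
                   pose_tol: float = 0.05) -> bool:
    """Advance to the next stage when the contact set changes or the object
    pose moves beyond a tolerance (one reading of 'switching of object
    poses and contact points')."""
    pose_moved = np.linalg.norm(pose - prev_pose) > pose_tol
    return contacts != prev_contacts or pose_moved
```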
Abstract:Grasp detection in cluttered scenes is a very challenging task for robots. Generating synthetic grasp data is a popular way to train and test grasp methods, as in Dex-net and GraspNet; however, these methods generate training grasps on 3D synthetic object models but evaluate on images or point clouds with different distributions, which reduces performance on real scenes due to sparse grasp labels and covariate shift. To address these problems, we propose a novel on-policy grasp detection method that can train and test on the same distribution, with dense pixel-level grasp labels generated on RGB-D images. A Parallel-Depth Grasp Generation (PDG-Generation) method is proposed to generate a parallel depth image through a new imaging model that projects points in parallel; the method then generates multiple candidate grasps for each pixel and obtains robust grasps through flatness detection, a force-closure metric, and collision detection. On this basis, a large comprehensive Pixel-Level Grasp Pose Dataset (PLGP-Dataset) is constructed and released; unlike previous datasets with off-policy data and sparse grasp samples, it is the first pixel-level grasp dataset, with an on-policy distribution in which grasps are generated based on depth images. Lastly, we build and test a series of pixel-level grasp detection networks with a data augmentation process for imbalanced training, which learn grasp poses in a decoupled manner from the input RGB-D images. Extensive experiments show that our on-policy grasp method largely overcomes the gap between simulation and reality and achieves state-of-the-art performance. Code and data are provided at https://github.com/liuchunsense/PLGP-Dataset.
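The "parallel" imaging model can be pictured as an orthographic projection of the point cloud along the camera z-axis. The sketch below, with illustrative grid bounds and resolution, shows one minimal way to rasterize such a parallel depth image; it is not the PDG-Generation implementation.

```python
# Minimal sketch: render a parallel depth image by projecting points along
# parallel (orthographic) rays instead of perspective ones; the workspace
# bounds and image shape are illustrative assumptions.
import numpy as np

def parallel_depth_image(points: np.ndarray, x_range=(-0.5, 0.5),
                         y_range=(-0.5, 0.5), shape=(480, 640)) -> np.ndarray:
    """points: (N, 3) array in camera coordinates; returns an (H, W) depth
    map keeping the nearest z value per pixel."""
    h, w = shape
    depth = np.full(shape, np.inf)
    # Orthographic mapping: x and y go straight to pixel columns and rows.
    u = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * (w - 1)).astype(int)
    v = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * (h - 1)).astype(int)
    keep = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    np.minimum.at(depth, (v[keep], u[keep]), points[keep, 2])  # nearest surface wins
    return depth
```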
Abstract:Video captioning is a challenging and significant task that describes the content of a video clip in a single sentence. Video captioning models typically adopt an encoder-decoder architecture. We find that normalizing the extracted video features can improve the final performance of video captioning. Encoder-decoder models are usually trained with teacher-forcing strategies, which push the predicted probability of each word toward a one-hot (0-1) distribution and ignore other words. In this paper, we present a novel architecture that introduces a guidance module to encourage the encoder-decoder model to generate words related to the past and future words in a caption. Based on the normalization and the guidance module, the guidance module net (GMNet) is built. Experimental results on the commonly used MSVD dataset show that the proposed GMNet improves the performance of the encoder-decoder model on video captioning tasks.
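A hedged sketch of the two ingredients the abstract names, feature normalization and guidance toward past and future caption words, is given below; the neighbor-smoothing scheme is an assumption about how such guidance could be realized, not GMNet's actual loss.

```python
# Sketch only: L2-normalize video features, and build soft targets that keep
# most probability mass on the gold word while sharing some with nearby
# (past/future) words, instead of a pure one-hot target. The window size and
# mass split are illustrative assumptions.
import torch
import torch.nn.functional as F

def normalize_features(video_feats: torch.Tensor) -> torch.Tensor:
    return F.normalize(video_feats, p=2, dim=-1)   # unit norm per feature vector

def guidance_targets(caption_ids: torch.Tensor, vocab_size: int,
                     window: int = 2, neighbor_mass: float = 0.2) -> torch.Tensor:
    """caption_ids: (T,) token ids; returns (T, vocab) soft targets."""
    T = caption_ids.shape[0]
    targets = torch.zeros(T, vocab_size)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        neighbors = [i for i in range(lo, hi) if i != t]
        if neighbors:
            targets[t, caption_ids[t]] = 1.0 - neighbor_mass
            for i in neighbors:
                targets[t, caption_ids[i]] += neighbor_mass / len(neighbors)
        else:
            targets[t, caption_ids[t]] = 1.0   # single-word caption edge case
    return targets
```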
Abstract:In this paper, a new definition of the tensor p-shrinkage nuclear norm (p-TNN) is proposed based on the tensor singular value decomposition (t-SVD). In particular, we prove that p-TNN is a better approximation of the tensor average rank than the tensor nuclear norm when p < 1. Therefore, by employing the p-shrinkage nuclear norm, a novel low-rank tensor completion (LRTC) model is proposed to estimate a tensor from its partial observations. Statistically, an upper bound on the recovery error is provided for the LRTC model. Furthermore, an efficient algorithm, accelerated by an adaptive momentum scheme, is developed to solve the resulting nonconvex optimization problem. The algorithm is further guaranteed to enjoy a global convergence rate under a smoothness assumption. Numerical experiments conducted on both synthetic and real-world data sets verify our results and demonstrate the superiority of p-TNN over several state-of-the-art methods in LRTC problems.
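For intuition, the following sketch shows the slice-wise proximal step commonly used with t-SVD-based norms: transform the frontal slices by FFT along the third mode, shrink their singular values, and invert. The Chartrand-style p-shrinkage rule used here is one plausible choice and may differ from the paper's exact operator.

```python
# Sketch, under stated assumptions, of a t-SVD proximal step with
# p-shrinkage applied to the singular values of each frontal slice.
import numpy as np

def p_shrink(sigma: np.ndarray, lam: float, p: float) -> np.ndarray:
    # Reduces to soft-thresholding at p = 1; shrinks large values less for p < 1.
    return np.maximum(sigma - lam ** (2 - p) * np.maximum(sigma, 1e-12) ** (p - 1), 0.0)

def tsvd_p_prox(X: np.ndarray, lam: float, p: float) -> np.ndarray:
    Xf = np.fft.fft(X, axis=2)                    # frontal slices in the Fourier domain
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Xf[:, :, k] = (U * p_shrink(s, lam, p)) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))
```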
Abstract:Many practical problems involve recovering a binary matrix from partial information, which has brought increasing attention to the binary matrix completion (BMC) technique in machine learning. In particular, we consider a special case of the BMC problem in which only a subset of positive elements can be observed. In recent years, methods based on convex regularization have been the mainstream approach for this task. However, nonconvex surrogates have demonstrated better empirical performance in standard matrix completion. Accordingly, we propose a novel BMC model with nonconvex regularizers and provide a recovery guarantee for the model. Furthermore, to solve the resulting nonconvex optimization problem, we improve the popular proximal algorithm with acceleration strategies. The algorithm is guaranteed to converge at a rate of order ${1/T}$, where $T$ is the number of iterations. Extensive experiments conducted on both synthetic and real-world data sets demonstrate the superiority of the proposed approach over other competing methods.
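As a minimal sketch of the accelerated proximal scheme the abstract alludes to, one iteration could combine momentum extrapolation with a proximal-gradient step; the generic `grad_fn` and `prox` hooks below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch only: one accelerated proximal-gradient iteration with a standard
# momentum weight; grad_fn is the gradient of the smooth loss over observed
# entries, and prox applies the (possibly nonconvex) regularizer's operator.
import numpy as np

def apg_step(X, X_prev, grad_fn, prox, step: float, t: int):
    beta = (t - 1) / (t + 2)                 # standard momentum weight
    Y = X + beta * (X - X_prev)              # extrapolation point
    return prox(Y - step * grad_fn(Y), step)

# Example placeholder prox (soft-thresholding stands in for the nonconvex one):
soft = lambda Z, s: np.sign(Z) * np.maximum(np.abs(Z) - 0.1 * s, 0.0)
```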