Abstract: Tensor ring (TR) decomposition has recently received increased attention due to its superior expressive performance for high-order tensors. However, the applicability of traditional TR decomposition algorithms to real-world applications is hindered by prevalent large data sizes, missing entries, and corruption with outliers. In this work, we propose a scalable and robust TR decomposition algorithm capable of handling large-scale tensor data with missing entries and gross corruptions. We first develop a novel auto-weighted steepest descent method that can adaptively fill the missing entries and identify the outliers during the decomposition process. Further, taking advantage of the tensor ring model, we develop a novel fast Gram matrix computation (FGMC) approach and a randomized subtensor sketching (RStS) strategy, which yield a significant reduction in storage and computational complexity. Experimental results demonstrate that the proposed method outperforms existing TR decomposition methods in the presence of outliers, and runs significantly faster than existing robust tensor completion algorithms.
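For readers unfamiliar with the tensor ring model, the following minimal sketch (dimensions, ranks, and function names are illustrative, not from the paper) reconstructs a full tensor from its TR cores by chaining contractions and closing the ring with a trace:

```python
import numpy as np

def tr_to_full(cores):
    """Reconstruct a full tensor from tensor-ring (TR) cores.

    Each core has shape (r_k, n_k, r_{k+1}), with r_N == r_0 so the
    chain of matrix products closes into a ring (hence the trace).
    """
    full = cores[0]  # shape (r0, n0, r1)
    for core in cores[1:]:
        # contract trailing rank index with the next core's leading rank index
        full = np.tensordot(full, core, axes=([-1], [0]))
    # full now has shape (r0, n0, ..., n_{N-1}, r0); close the ring
    return np.trace(full, axis1=0, axis2=-1)

# toy example: a 4x5x6 tensor with TR ranks (2, 3, 2)
rng = np.random.default_rng(0)
dims, ranks = (4, 5, 6), (2, 3, 2, 2)  # ranks[-1] == ranks[0] closes the ring
cores = [rng.standard_normal((ranks[k], dims[k], ranks[k + 1])) for k in range(3)]
T = tr_to_full(cores)
print(T.shape)  # (4, 5, 6)
```

Each entry equals the trace of a product of core slices, i.e. `T[i,j,k] = trace(G1[:,i,:] @ G2[:,j,:] @ G3[:,k,:])`, which is the defining property of the TR format.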
Abstract: Tensor completion is the problem of estimating the missing values of high-order data from partially observed entries. Among several definitions of tensor rank, tensor ring rank affords the flexibility and accuracy needed to model tensors of different orders, which motivated recent efforts on tensor-ring completion. However, data corruption due to prevailing outliers poses major challenges to existing algorithms. In this paper, we develop a robust approach to tensor ring completion that uses an M-estimator as its error statistic, which can significantly alleviate the effect of outliers. Leveraging a half-quadratic (HQ) method, we reformulate the problem as one of weighted tensor completion. We present two HQ-based algorithms based on truncated singular value decomposition and matrix factorization, along with their convergence and complexity analysis. Extendibility of the proposed approach to alternative definitions of tensor rank is also discussed. The experimental results demonstrate the superior performance of the proposed approach over state-of-the-art robust algorithms for tensor completion.
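As a rough illustration of the half-quadratic idea (using the Welsch M-estimator as one concrete choice; the paper's exact estimator and algorithms may differ), each HQ step converts the robust loss into per-entry weights computed from the current residuals, after which the completion subproblem becomes an ordinary weighted one:

```python
import numpy as np

def hq_weights(residual, sigma):
    """Half-quadratic weights for the Welsch M-estimator.

    Large residuals (outliers) receive weights near zero, so they
    contribute little to the subsequent weighted completion step.
    """
    return np.exp(-(residual ** 2) / (2 * sigma ** 2))

# residuals: two inliers and one gross outlier
r = np.array([0.1, -0.2, 10.0])
w = hq_weights(r, sigma=1.0)
print(w)  # inliers get weights near 1, the outlier near 0
```

Alternating between this weight update and a weighted completion step is the standard HQ minimization pattern.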
Abstract: Tensor completion is the problem of estimating the missing entries of a partially observed tensor with a certain low-rank structure. It improves on matrix completion for image and video data by capturing additional structural information intrinsic to such data. Traditional completion algorithms treat the entire visual data as a tensor, which may not work well, especially when camera or object motion is present. In this paper, we develop a novel non-local patch-based tensor ring completion algorithm. In the proposed approach, similar patches are extracted for each reference patch along both the spatial and temporal domains of the visual data. The collected patches are then formed into a high-order tensor, and a tensor ring completion algorithm is proposed to recover the completed tensor. A novel interval sampling-based block matching (ISBM) strategy and a hybrid completion strategy are also proposed to improve efficiency and accuracy. Further, we develop an online patch-based completion algorithm to handle streaming video data, together with an efficient online tensor ring completion algorithm that reduces the time cost. Extensive experimental results demonstrate the superior performance of the proposed algorithms compared with state-of-the-art methods.
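A simplified sketch of the non-local grouping step (an exhaustive strided search under Euclidean distance, standing in for the paper's interval sampling-based block matching; all names and parameters here are illustrative):

```python
import numpy as np

def group_similar_patches(frame, ref_xy, patch=8, stride=4, k=10):
    """Collect the k patches most similar to a reference patch.

    Stacking the grouped patches (and, in the video case, repeating
    across frames) yields the high-order tensor that is then completed.
    """
    H, W = frame.shape
    ry, rx = ref_xy
    ref = frame[ry:ry + patch, rx:rx + patch]
    candidates = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = frame[y:y + patch, x:x + patch]
            candidates.append((np.sum((p - ref) ** 2), p))
    candidates.sort(key=lambda t: t[0])  # closest patches first
    return np.stack([p for _, p in candidates[:k]])  # shape (k, patch, patch)

frame = np.random.default_rng(0).standard_normal((32, 32))
group = group_similar_patches(frame, ref_xy=(0, 0))
print(group.shape)  # (10, 8, 8)
```

The reference patch itself is always the closest match (distance zero), so it appears first in the group.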
Abstract: The goal of tensor completion is to recover a tensor from a subset of its entries, often by exploiting its low-rank property. Among several useful definitions of tensor rank, the low-tubal-rank was shown to give a valuable characterization of the inherent low-rank structure of a tensor. While some low-tubal-rank tensor completion algorithms with favorable performance have been recently proposed, these algorithms utilize second-order statistics to measure the error residual, which may not work well when the observed entries contain large outliers. In this paper, we propose a new objective function for low-tubal-rank tensor completion, which uses correntropy as the error measure to mitigate the effect of the outliers. To efficiently optimize the proposed objective, we leverage a half-quadratic minimization technique whereby the optimization is transformed to a weighted low-tubal-rank tensor factorization problem. Subsequently, we propose two simple and efficient algorithms to obtain the solution and provide their convergence and complexity analysis. Numerical results using both synthetic and real data demonstrate the robust and superior performance of the proposed algorithms.
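To make the weighted-factorization reformulation concrete, here is a toy rank-1 matrix version (a deliberate simplification of the tensor setting; in the full HQ scheme the weights would come from the correntropy loss and be refreshed between factorization updates):

```python
import numpy as np

def weighted_rank1_als(M, W, iters=100):
    """Alternating closed-form updates for the weighted factorization
    min_{u,v} sum_ij W_ij * (M_ij - u_i * v_j)^2 (rank-1 toy case)."""
    rng = np.random.default_rng(1)
    u = rng.standard_normal(M.shape[0])
    v = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        u = (W * M) @ v / np.maximum(W @ v**2, 1e-12)
        v = (W * M).T @ u / np.maximum(W.T @ u**2, 1e-12)
    return u, v

# rank-1 data with one gross outlier whose weight is set to zero,
# mimicking what the correntropy-induced HQ weights would do
M = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
M_corrupt = M.copy()
M_corrupt[0, 0] += 100.0
W = np.ones_like(M)
W[0, 0] = 0.0
u, v = weighted_rank1_als(M_corrupt, W)
print(abs(np.outer(u, v)[0, 0] - M[0, 0]))  # small: clean value recovered
```

Because the outlier entry carries zero weight, the factorization fits the remaining (exactly rank-1) entries and implicitly restores the corrupted one.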