Abstract: In recent studies, the tensor ring (TR) rank has shown high effectiveness in tensor completion due to its ability to capture the intrinsic structure within high-order tensors. A recently proposed TR rank minimization method is based on a convex relaxation that penalizes the weighted sum of the nuclear norms of the TR unfolding matrices. However, this method treats each singular value equally and neglects their physical meanings, which usually leads to suboptimal solutions in practice. In this paper, we propose to use a logdet-based function as a nonconvex smooth relaxation of the TR rank for tensor completion, which approximates the TR rank more accurately and better promotes the low-rankness of the solution. To solve the proposed nonconvex model efficiently, we develop an alternating direction method of multipliers (ADMM) algorithm and theoretically prove that, under some mild assumptions, it converges to a stationary point. Extensive experiments on color images, multispectral images, and color videos demonstrate that the proposed method outperforms several state-of-the-art competitors in both visual and quantitative comparisons. Keywords: nonconvex optimization, tensor ring rank, logdet function, tensor completion, alternating direction method of multipliers.
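For concreteness, a plausible form of the logdet-relaxed completion model, written in the notation commonly used in this literature (the weights $\alpha_k$, the TR unfolding $\mathbf{X}_{\langle k \rangle}$, and the smoothing parameter $\varepsilon > 0$ are standard-usage assumptions, not details given in the abstract):
$$\min_{\mathcal{X}} \ \sum_{k=1}^{N} \alpha_k \sum_{i} \log\big(\sigma_i(\mathbf{X}_{\langle k \rangle}) + \varepsilon\big) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}),$$
where $\sigma_i(\cdot)$ denotes the $i$-th singular value and $\mathcal{P}_{\Omega}$ retains the entries observed in $\Omega$. Unlike the nuclear norm $\sum_i \sigma_i(\cdot)$, the logdet term grows only logarithmically, so large singular values are penalized far less than small ones, yielding a tighter surrogate of the rank.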
Abstract: The tensor train (TT) rank has received increasing attention in tensor completion due to its ability to capture the global correlation of high-order tensors ($\textrm{order} > 3$). For third-order visual data, direct TT rank minimization does not exploit the potential of the TT rank for high-order tensors. TT rank minimization combined with \emph{ket augmentation}, which transforms a lower-order tensor (e.g., visual data) into a higher-order one, suffers from serious block artifacts. To tackle this issue, we propose TT rank minimization with nonlocal self-similarity for tensor completion, which simultaneously exploits the spatial, temporal/spectral, and nonlocal redundancy in visual data. More precisely, TT rank minimization is performed on a higher-order tensor, called a group, formed by stacking similar cubes, which naturally and fully takes advantage of the ability of the TT rank to model high-order tensors. Moreover, a perturbation analysis for the TT low-rankness of each group is established. We develop an alternating direction method of multipliers algorithm tailored to the specific structure of the proposed model. Extensive experiments demonstrate that the proposed method is superior to several existing state-of-the-art methods in terms of both qualitative and quantitative measures.
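As a hedged sketch of the group-wise model this abstract describes (the group index $j$, the weights $\alpha_k$, the grouping operator $\mathcal{S}_j$, and the TT unfolding $\mathbf{G}_{j,[k]}$, which matricizes the first $k$ modes against the remaining ones, are notational assumptions, not details given in the abstract):
$$\min_{\mathcal{X},\,\{\mathcal{G}_j\}} \ \sum_{j} \sum_{k=1}^{N-1} \alpha_k \big\|\mathbf{G}_{j,[k]}\big\|_{*} \quad \text{s.t.} \quad \mathcal{G}_j = \mathcal{S}_j(\mathcal{X}), \ \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}),$$
where $\mathcal{S}_j$ extracts the cubes similar to the $j$-th reference cube and stacks them into the group tensor $\mathcal{G}_j$. Because each group is itself a higher-order tensor, its TT unfoldings are balanced and informative, which is exactly the regime in which the TT rank is effective; ket augmentation instead reaches high order by recursively tiling pixel blocks, which is what produces the block artifacts mentioned above.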
Abstract: As low-rank modeling has achieved great success in tensor recovery, many research efforts have been devoted to defining the tensor rank. Among them, the recently popular tensor tubal rank, defined via the tensor singular value decomposition (t-SVD), has obtained promising results. However, the t-SVD framework and the tensor tubal rank are applicable only to three-way tensors and lack the flexibility to handle different correlations along different modes. To tackle these two issues, we define a new tensor unfolding operator, named mode-$k_1k_2$ tensor unfolding, as the process of lexicographically stacking the mode-$k_1k_2$ slices of an $N$-way tensor into a three-way tensor; it is a three-way extension of the well-known mode-$k$ tensor matricization. Based on this operator, we define a novel tensor rank, the tensor $N$-tubal rank, as a vector whose elements are the tubal ranks of all mode-$k_1k_2$ unfolding tensors, to depict the correlations along different modes. To efficiently minimize the proposed $N$-tubal rank, we establish its convex relaxation: the weighted sum of tensor nuclear norms (WSTNN). We then apply the WSTNN to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). The corresponding WSTNN-based LRTC and TRPCA models are proposed, and two efficient alternating direction method of multipliers (ADMM)-based algorithms are developed to solve them. Numerical experiments demonstrate that the proposed models significantly outperform the compared ones.
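Written out under common conventions (the weights $\alpha_{k_1k_2} \ge 0$ with $\sum_{k_1<k_2} \alpha_{k_1k_2} = 1$ are an assumption; $\|\cdot\|_{\mathrm{TNN}}$ denotes the tensor nuclear norm induced by the t-SVD), the WSTNN and the resulting LRTC model read:
$$\mathrm{WSTNN}(\mathcal{X}) := \sum_{1 \le k_1 < k_2 \le N} \alpha_{k_1k_2} \big\|\mathcal{X}_{(k_1k_2)}\big\|_{\mathrm{TNN}}, \qquad \min_{\mathcal{X}} \ \mathrm{WSTNN}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{O}),$$
where $\mathcal{X}_{(k_1k_2)}$ is the mode-$k_1k_2$ unfolding tensor of $\mathcal{X}$ and $\mathcal{O}$ holds the observed entries. Each TNN term serves as the convex surrogate of the corresponding tubal rank, so minimizing the weighted sum jointly promotes low-rankness across all mode pairs.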