Abstract: Sparse representation is considered a viable solution to visual tracking. In this paper, we propose a structured group local sparse tracker (SGLST), which exploits local patches inside target candidates in the particle filter framework. Unlike conventional local sparse trackers, the proposed optimization model in SGLST not only exploits the local and spatial information of the target candidates but also preserves the spatial layout structure among them by employing a group-sparsity regularization term. To solve the optimization model, we propose an efficient numerical algorithm consisting of two subproblems with closed-form solutions. Both qualitative and quantitative evaluations on benchmarks of challenging image sequences demonstrate the superior performance of the proposed tracker against several state-of-the-art trackers.
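For orientation, a generic form of such a group-sparse local representation model (a sketch under standard assumptions, not necessarily the exact objective optimized in SGLST) is:

    \min_{C}\ \frac{1}{2}\,\| X - D\,C \|_F^2 \;+\; \lambda\, \| C \|_{2,1},
    \qquad \| C \|_{2,1} \;=\; \sum_{j} \| C_{j,:} \|_2

Here the columns of X are feature vectors of the local patches of one candidate, D is the dictionary formed from the corresponding template patches, and C holds the representation coefficients. The l2,1 norm drives entire rows of C to zero, so all patches of a candidate select the same few templates, which is one common way a group-sparsity term encodes the spatial layout structure mentioned above.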
Abstract: Sparse representation has recently been successfully applied to visual tracking. It uses a set of templates to represent target candidates and selects the candidate with the minimum reconstruction error as the tracking result. In this paper, we propose a robust deep features-based structured group local sparse tracker (DF-SGLST), which exploits deep features of local patches inside target candidates and represents them by a set of templates in the particle filter framework. Unlike conventional local sparse trackers, the proposed optimization model in DF-SGLST employs a group-sparsity regularization term to seamlessly exploit the local and spatial information of the target candidates and preserve the spatial layout structure among them. To solve the optimization model, we propose an efficient numerical algorithm consisting of two subproblems with closed-form solutions. Evaluations in terms of success and precision on benchmarks of challenging image sequences (e.g., OTB50 and OTB100) demonstrate the superior performance of the proposed tracker against several state-of-the-art trackers.
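As a minimal illustration of the selection rule stated above (choose the candidate whose features are best reconstructed by the templates), the Python sketch below scores candidates against a template dictionary. The function names are hypothetical, the ridge-regularized solve is only a stand-in for the structured group-sparse solver of DF-SGLST, and random vectors stand in for deep features:

    import numpy as np

    def reconstruction_error(D, x, c):
        """Squared residual of representing feature x with dictionary D and coefficients c."""
        r = x - D @ c
        return float(r @ r)

    def select_best_candidate(D, candidates, lam=0.01):
        """Illustrative candidate selection: pick the candidate with the smallest
        reconstruction error. The solve below is a ridge-regularized stand-in for
        the paper's group-sparse solver (for illustration only)."""
        k = D.shape[1]
        errors = []
        for x in candidates:
            # Stand-in solver: regularized least squares instead of the DF-SGLST model.
            c = np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ x)
            errors.append(reconstruction_error(D, x, c))
        return int(np.argmin(errors))  # index of the tracking result

    # Toy usage: random vectors stand in for deep features of template and candidate patches.
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 10))            # 10 template feature vectors
    candidates = [rng.standard_normal(64) for _ in range(20)]
    best = select_best_candidate(D, candidates)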
Abstract: Sparse representation is a viable solution to visual tracking. In this paper, we propose a structured multi-task multi-view tracking (SMTMVT) method, which exploits a sparse appearance model in the particle filter framework to track targets under different challenges. Specifically, we extract features of the target candidates from different views and sparsely represent them by a linear combination of templates from the corresponding views. Unlike conventional sparse trackers, SMTMVT not only jointly considers the relationships between different tasks and different views but also retains the structure among different views in a robust multi-task multi-view formulation. We introduce a numerical algorithm based on the proximal gradient method that quickly and effectively finds the sparse coefficients by dividing the optimization problem into two subproblems with closed-form solutions. Both qualitative and quantitative evaluations on a benchmark of challenging image sequences demonstrate the superior performance of the proposed tracker against various state-of-the-art trackers.
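To make the two-subproblem structure concrete, the Python sketch below shows a standard proximal gradient iteration for a group-sparse objective, alternating a gradient step on the smooth reconstruction term with a closed-form proximal step; the l2,1 objective, step size, and variable names are illustrative assumptions rather than the exact SMTMVT formulation:

    import numpy as np

    def prox_l21(C, tau):
        """Closed-form proximal operator of tau * ||C||_{2,1}: shrink each row toward zero."""
        norms = np.linalg.norm(C, axis=1, keepdims=True)
        scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
        return scale * C

    def proximal_gradient(D, X, lam=0.1, step=None, iters=200):
        """Generic proximal gradient for min_C 0.5*||X - D C||_F^2 + lam*||C||_{2,1}.
        This is the standard scheme, not necessarily the exact SMTMVT updates."""
        if step is None:
            step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
        C = np.zeros((D.shape[1], X.shape[1]))
        for _ in range(iters):
            grad = D.T @ (D @ C - X)                     # subproblem 1: gradient step on the smooth term
            C = prox_l21(C - step * grad, step * lam)    # subproblem 2: closed-form proximal step
        return C

The step size 1/||D||_2^2 is the usual Lipschitz-constant choice that keeps the gradient step stable; the proximal step has the closed form given in prox_l21, which is what makes each iteration inexpensive.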