Abstract: With the emergence of single large models capable of solving a multitude of tasks in NLP, there has been growing research interest in achieving similar generality in computer vision. On the one hand, most of these generic models, referred to as generalist vision models, aim to produce unified outputs serving different tasks. On the other hand, some existing models combine different input types (a.k.a. data modalities), which are then processed by a single large model. Yet this combination step remains specialized to particular modalities, falling short of the original ambition. In this paper, we show that such specialization during unification is unnecessary in the context of RGB-X video object tracking. Our single-model tracker, termed XTrack, can remain blind to the identity of the modality X at inference time. Our tracker employs a mixture of modal experts, comprising experts dedicated to shared cross-modal commonality and others that reason flexibly, conditioned on the input modality. This design unifies the input modalities in a common latent space without weakening modality-specific representations. With this idea, our training process is extremely simple: a multi-label classification loss combined with a routing function aligns and unifies all modalities, even when training uses only paired data. Thus, at inference, the tracker can accept any modality without relying on the inductive bias of a modality prior and achieves generalist performance. Without bells and whistles, our generalist, modality-blind tracker achieves performance competitive with well-established modality-specific models on 5 benchmarks across 3 auxiliary modalities, covering commonly used depth, thermal, and event data.
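To make the mixture-of-modal-experts idea concrete, the sketch below shows one way such a layer could look in PyTorch: a shared expert captures cross-modal commonality, a router infers soft expert weights from the input tokens themselves (so no modality label is needed at inference), and the router logits are returned so a classification loss on the modality can supervise routing during training. This is a minimal sketch under our own assumptions, not the authors' released implementation; the name `ModalExpertMixture`, the linear experts, and the mean-pooled routing summary are all illustrative choices.

```python
# Minimal sketch (not the XTrack code): a mixture-of-modal-experts layer.
# All names and design details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalExpertMixture(nn.Module):
    def __init__(self, dim: int, num_experts: int = 3):
        super().__init__()
        # One expert dedicated to shared cross-modal commonality...
        self.shared_expert = nn.Linear(dim, dim)
        # ...plus modality-conditioned experts selected by a router.
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.router = nn.Linear(dim, num_experts)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, num_tokens, dim)
        # Route on a pooled summary of the input, so the modality is inferred
        # from the data itself rather than supplied as an explicit label.
        summary = tokens.mean(dim=1)                      # (batch, dim)
        logits = self.router(summary)                     # (batch, num_experts)
        weights = F.softmax(logits, dim=-1)               # soft expert weights
        # (batch, num_experts, num_tokens, dim)
        expert_out = torch.stack([e(tokens) for e in self.experts], dim=1)
        # Weighted combination of modality experts plus the shared branch.
        mixed = (weights[:, :, None, None] * expert_out).sum(dim=1)
        out = mixed + self.shared_expert(tokens)
        # Returning the logits lets a modality-classification loss
        # supervise the routing function during training.
        return out, logits

# Usage: x = torch.randn(2, 64, 256); y, logits = ModalExpertMixture(256)(x)
```

Because the router conditions only on the tokens, the same weights serve depth, thermal, or event inputs interchangeably, which is what allows the tracker to stay blind to the modality at inference.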
Abstract: Hyperspectral imagery provides abundant spectral information beyond the visible RGB bands, offering rich discriminative details about objects in a scene. Leveraging such data has the potential to enhance visual tracking performance. In this paper, we propose a hyperspectral object tracker based on hybrid attention (HHTrack). The core of HHTrack is a hyperspectral hybrid attention (HHA) module that unifies feature extraction and fusion within one component through token interactions. A hyperspectral bands fusion (HBF) module is also introduced to selectively aggregate spatial and spectral signatures from the full hyperspectral input. Extensive experiments demonstrate the state-of-the-art performance of HHTrack on benchmark Near-Infrared (NIR), Red-Near-Infrared (Red-NIR), and Visible (VIS) hyperspectral tracking datasets. Our work provides new insights into harnessing the strengths of transformers and hyperspectral fusion to advance robust object tracking.
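The following PyTorch sketch illustrates the two ideas the abstract names: a band-fusion step that learns per-pixel mixing weights over the spectral bands, and a hybrid attention step in which template and search tokens attend jointly, so feature extraction and fusion happen in one component. This is a hypothetical sketch under our own assumptions, not the HHTrack implementation; the names `BandFusion` and `HybridAttention` and the 1x1-convolution fusion are illustrative stand-ins for the paper's HBF and HHA modules.

```python
# Minimal sketch (not the released HHTrack code): band fusion via learned
# spectral mixing, plus joint attention over template and search tokens.
# Module and argument names are illustrative assumptions.
import torch
import torch.nn as nn

class BandFusion(nn.Module):
    """Selectively aggregate the spectral bands of a hyperspectral cube."""
    def __init__(self, num_bands: int, out_channels: int = 3):
        super().__init__()
        # A 1x1 convolution is a learned linear mixing of bands per pixel.
        self.mix = nn.Conv2d(num_bands, out_channels, kernel_size=1)

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        # cube: (batch, num_bands, H, W) hyperspectral input
        return self.mix(cube)

class HybridAttention(nn.Module):
    """One attention pass over concatenated template and search tokens."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, template: torch.Tensor, search: torch.Tensor):
        # Concatenating the two token sets lets a single self-attention pass
        # perform both intra-image feature extraction and cross-image fusion.
        tokens = torch.cat([template, search], dim=1)
        out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + out)
        # Split back: the search-region tokens feed the tracking head.
        return tokens[:, template.size(1):]

# Usage: fused = BandFusion(16)(torch.randn(2, 16, 128, 128))
#        feats = HybridAttention(256)(torch.randn(2, 64, 256),
#                                     torch.randn(2, 256, 256))
```

Folding fusion into the same attention pass as feature extraction avoids a separate correlation stage, which is one plausible reading of how the HHA module unifies the two steps through token interactions.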