Abstract: Existing human recognition systems often rely on separate, specialized models for face and body analysis, limiting their effectiveness in real-world scenarios where pose, visibility, and context vary widely. This paper introduces SapiensID, a unified model that bridges this gap, achieving robust performance across diverse settings. SapiensID introduces (i) Retina Patch (RP), a dynamic patch generation scheme that adapts to subject scale and ensures consistent tokenization of regions of interest, (ii) a masked recognition model (MRM) that learns from token sequences of variable length, and (iii) Semantic Attention Head (SAH), a module that learns pose-invariant representations by pooling features around key body parts. To facilitate training, we introduce WebBody4M, a large-scale dataset capturing diverse poses and scale variations. Extensive experiments demonstrate that SapiensID achieves state-of-the-art results on various body ReID benchmarks, outperforming specialized models in both short-term and long-term scenarios while remaining competitive with dedicated face recognition systems. Furthermore, SapiensID establishes a strong baseline for the newly introduced challenge of Cross Pose-Scale ReID, demonstrating its ability to generalize to complex, real-world conditions.
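To make the pooling idea behind the Semantic Attention Head more concrete, the following is a minimal, assumption-laden sketch of keypoint-conditioned attention pooling: one learnable query per body part attends only to patch tokens near that part's detected keypoint. All names, shapes, and the radius-based masking are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class KeypointAttentionPooling(nn.Module):
    """Illustrative pooling head in the spirit of SAH: one learnable query per
    body part attends to patch tokens, with attention restricted to tokens near
    that part's keypoint. Shapes and the masking rule are assumptions."""

    def __init__(self, dim: int, num_parts: int, num_heads: int = 8):
        super().__init__()
        self.part_queries = nn.Parameter(torch.randn(num_parts, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens, token_xy, keypoints_xy, radius: float = 0.15):
        # tokens:       (B, N, D)  patch embeddings from the backbone
        # token_xy:     (B, N, 2)  normalized patch-center coordinates
        # keypoints_xy: (B, P, 2)  normalized body-keypoint coordinates
        B = tokens.shape[0]
        queries = self.part_queries.unsqueeze(0).expand(B, -1, -1)   # (B, P, D)
        # Block attention to tokens farther than `radius` from the part keypoint
        # (assumes at least one token per part falls inside the radius).
        dist = torch.cdist(keypoints_xy, token_xy)                   # (B, P, N)
        attn_mask = (dist > radius).repeat_interleave(self.attn.num_heads, dim=0)
        part_feats, _ = self.attn(queries, tokens, tokens, attn_mask=attn_mask)
        return part_feats                                            # (B, P, D) part descriptors
```

Because each part descriptor is pooled from tokens around the same semantic keypoint regardless of where that part appears in the image, the resulting representation is less sensitive to pose and framing.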
Abstract: Gait recognition is one of the most important remote identification technologies and is steadily gaining traction in research and industrial communities. However, existing gait recognition methods rely heavily on task-specific upstream models driven by supervised learning to provide explicit gait representations, which inevitably introduces expensive annotation costs and may cause cumulative errors. Departing from this trend, this work explores effective gait representations based on the all-purpose knowledge produced by task-agnostic Large Vision Models (LVMs) and proposes a simple yet efficient gait framework, termed BigGait. Specifically, the Gait Representation Extractor (GRE) in BigGait effectively transforms all-purpose knowledge into implicit gait features in an unsupervised manner, drawing on design principles of established gait representation construction approaches. Experimental results on CCPG, CASIA-B* and SUSTech1K indicate that BigGait significantly outperforms previous methods in both self-domain and cross-domain tasks in most cases, and provides a more practical paradigm for learning the next-generation gait representation. Finally, we discuss prospective challenges and promising directions in LVM-based gait recognition, aiming to inspire future work on this emerging topic. The source code will be available at https://github.com/ShiqiYu/OpenGait.
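The overall idea of turning frozen, task-agnostic LVM features into a gait embedding can be sketched as below. This is a hypothetical skeleton under stated assumptions (a frozen ViT-like backbone that returns per-frame patch tokens, a small trainable projection head, simple average pooling); it is not BigGait's actual GRE, whose branches and objectives are described in the paper and repository.

```python
import torch
import torch.nn as nn

class GaitFromLVMFeatures(nn.Module):
    """Hypothetical skeleton: map frozen, task-agnostic LVM patch features to a
    compact gait embedding. The backbone interface and pooling are assumptions,
    not BigGait's Gait Representation Extractor."""

    def __init__(self, lvm: nn.Module, feat_dim: int, gait_dim: int = 256):
        super().__init__()
        self.lvm = lvm.eval()                       # frozen all-purpose backbone
        for p in self.lvm.parameters():
            p.requires_grad_(False)
        self.project = nn.Sequential(               # lightweight head, trained without gait labels
            nn.Linear(feat_dim, gait_dim), nn.GELU(), nn.Linear(gait_dim, gait_dim)
        )

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        with torch.no_grad():
            # Assumes the backbone returns patch tokens of shape (B*T, N, feat_dim).
            feats = self.lvm(frames.flatten(0, 1))
        feats = self.project(feats).mean(dim=1)     # pool patches -> (B*T, gait_dim)
        return feats.view(B, T, -1).mean(dim=1)     # pool frames  -> (B, gait_dim)
```

The design intent illustrated here matches the abstract's claim: the expensive, annotation-hungry upstream model is replaced by a frozen general-purpose backbone, and only a small head is adapted in an unsupervised manner.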
Abstract: Gait pattern is a promising biometric for remote identification applications, as it can be captured from a distance without requiring individual cooperation. Nevertheless, existing gait datasets typically suffer from limited diversity, with indoor datasets requiring participants to walk along a fixed route in a restricted setting, and outdoor datasets containing only a few walking sequences per subject. Prior generative methods have attempted to mitigate these limitations by building virtual gait datasets. They primarily focus on manipulating a single, specific gait attribute (e.g., viewpoint or carrying), and require supervised data pairs for training, thus lacking the flexibility and diversity needed for practical usage. In contrast, our GaitEditer can act as an online module to edit a broad range of gait attributes, such as pants, viewpoint, and even age, in an unsupervised manner, which current gait generative methods struggle with. Additionally, GaitEditer also finely preserves both temporal continuity and identity characteristics in generated gait sequences. Experiments show that GaitEditer provides extensive knowledge for clothing-invariant and view-invariant gait representation learning under various challenging scenarios. The source code will be available.
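The phrase "online module" suggests the editor is applied on the fly during representation learning rather than used to build a static virtual dataset. The sketch below shows how such a module could plug into a training loop as attribute-randomizing augmentation; the `editor` object and its `edit(sequence, attribute=...)` interface are hypothetical stand-ins, not the actual GaitEditer API.

```python
import random
import torch

def augment_with_gait_editor(silhouettes, editor,
                             attributes=("pants", "viewpoint", "age")):
    """Hypothetical usage of an online gait-editing module as data augmentation.
    `editor` and its edit() signature are assumptions for illustration only.

    silhouettes: (B, T, 1, H, W) gait silhouette sequences.
    """
    edited = []
    for seq in silhouettes:
        attr = random.choice(attributes)            # randomize one attribute per sequence
        with torch.no_grad():
            edited.append(editor.edit(seq, attribute=attr))
    # Identity and temporal continuity are assumed to be preserved by the editor,
    # so the edited batch can share labels with the original one.
    return torch.stack(edited)
```

Used this way, each identity is seen under many synthetic clothing and viewpoint variations, which is how the abstract's clothing-invariant and view-invariant representation learning would benefit.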