Abstract: Visual tracking is a complex problem due to unconstrained appearance variations and dynamic environments. Extracting complementary information from the object's environment via multiple features and adapting to the target's appearance variations are the key problems addressed in this work. To this end, we propose a robust object tracking framework based on Unified Graph Fusion (UGF) of multiple cues that adapts to the object's appearance. The proposed cross-diffusion of sparse and dense features not only suppresses individual feature deficiencies but also extracts complementary information from the multiple cues. This iterative process builds a robust unified feature representation that is invariant to object deformation, fast motion, and occlusion. The robustness of the unified feature also enables the random forest classifier to precisely distinguish the foreground from the background, adding resilience to background clutter. In addition, we present a novel kernel-based adaptation strategy using outlier detection and a transductive reliability metric. The adaptation strategy updates the appearance model to accommodate variations in scale, illumination, and rotation. Both qualitative and quantitative analyses of 25 benchmark video sequences (OTB-50, OTB-100, and VOT2017/18) show that the proposed UGF tracker performs favorably against 15 other state-of-the-art trackers under various object tracking challenges.
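To make the cross-diffusion idea concrete, the following is a minimal sketch of iterative graph fusion between two cue-specific affinity matrices (W1 from a sparse feature, W2 from a dense feature). The row normalization, neighbourhood size k, and iteration count are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def row_normalize(W):
    """Normalize each row of an affinity matrix so it sums to 1."""
    return W / W.sum(axis=1, keepdims=True)

def knn_sparsify(W, k):
    """Keep only the k strongest affinities per row (local sparse graph)."""
    S = np.zeros_like(W)
    idx = np.argsort(-W, axis=1)[:, :k]
    rows = np.arange(W.shape[0])[:, None]
    S[rows, idx] = W[rows, idx]
    return row_normalize(S)

def cross_diffuse(W1, W2, k=5, iters=20):
    """Cross-diffuse two cue graphs: each dense graph is propagated through
    the sparse local graph of the other cue, then the results are averaged
    into a unified affinity graph."""
    P1, P2 = row_normalize(W1), row_normalize(W2)
    S1, S2 = knn_sparsify(W1, k), knn_sparsify(W2, k)
    for _ in range(iters):
        P1_new = S1 @ P2 @ S1.T
        P2_new = S2 @ P1 @ S2.T
        P1, P2 = row_normalize(P1_new), row_normalize(P2_new)
    return (P1 + P2) / 2.0
```

In this sketch the sparse graphs suppress unreliable affinities while the dense graphs carry the complementary information, mirroring the complementary-cue fusion described above.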
Abstract: The particle filter (PF) is used extensively for the estimation of a target's non-linear and non-Gaussian state. However, its performance suffers from the inherent problems of sample degeneracy and impoverishment. To address this, we propose a novel resampling method based on Crow Search Optimization that replaces low-performing particles detected as outliers. The proposed outlier detection mechanism, combined with a transductive reliability metric, achieves faster convergence of the proposed PF tracking framework. In addition, we present an adaptive fuzzy fusion model to integrate the multiple cues extracted for each evaluated particle. Automatic boosting and suppression of particles using the proposed fusion model not only enhances the performance of the resampling method but also achieves optimal state estimation. The performance of the proposed tracker is evaluated on 12 benchmark video sequences and compared with state-of-the-art solutions. Qualitative and quantitative results reveal that the proposed tracker not only outperforms existing solutions but also efficiently handles various tracking challenges. On average, the proposed tracker achieves a CLE of 7.98 and an F-measure of 0.734.
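As a rough illustration of the resampling idea, the sketch below regenerates low-weight particles with a Crow Search-style update. The outlier threshold, flight length fl, awareness probability ap, and weight reassignment are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def crow_search_resample(particles, weights, fl=2.0, ap=0.1, rng=None):
    """Replace outlier (low-weight) particles by moving them toward the
    memorized positions of high-weight particles, following the Crow
    Search update rule. particles: (n, dim) array, weights: (n,) array."""
    rng = rng or np.random.default_rng()
    n = len(particles)
    # Assumed outlier criterion: weight below (mean - std) of all weights.
    thr = weights.mean() - weights.std()
    outliers = weights < thr
    # Memory: the top quarter of particles by weight (assumed memory size).
    memory = particles[np.argsort(-weights)[: max(1, n // 4)]]
    lo, hi = particles.min(axis=0), particles.max(axis=0)
    for i in np.where(outliers)[0]:
        j = rng.integers(len(memory))
        if rng.random() > ap:
            # Follow a randomly chosen "crow" (memorized good particle).
            particles[i] += fl * rng.random() * (memory[j] - particles[i])
        else:
            # Awareness case: reposition randomly within the particle cloud.
            particles[i] = lo + rng.random(particles.shape[1]) * (hi - lo)
    # Reset regenerated particles to the mean weight, then renormalize.
    weights[outliers] = weights.mean()
    weights /= weights.sum()
    return particles, weights
```

Moving degenerate particles toward memorized high-weight states, rather than duplicating survivors, is what counters sample impoverishment in this scheme.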