Multi-object tracking (MOT) remains a significant challenge in computer vision, requiring precise localization and continuous tracking of multiple objects in video sequences. The task is crucial for various applications, including action recognition and behavior analysis. Key challenges include occlusion, reidentification, fast-moving objects, and camera-motion artifacts. Prior research has explored tracking-by-detection methods and end-to-end models, with recent work focusing on tracking-by-attention approaches built on transformer architectures. The emergence of datasets that emphasize robust reidentification, such as DanceTrack, has highlighted the need for effective solutions. While memory-based approaches have shown promise, they often incur high computational complexity and memory usage. We propose a novel sparse memory approach that selectively stores critical features based on object motion and overlap awareness, enhancing efficiency while minimizing redundancy. Building upon MOTRv2, a hybrid of tracking-by-attention and tracking-by-detection, we introduce a training-free memory that bolsters reidentification while preserving the model's flexibility. Our memory approach achieves significant improvements over MOTRv2 on the DanceTrack test set, with gains of 1.1\% in HOTA and 2.1\% in IDF1.
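The abstract describes the selection rule only at a high level; the exact criteria are defined in the paper itself. As a rough illustration of the idea, the Python sketch below shows one way a training-free sparse memory could gate feature storage on object motion and inter-object overlap. All names, thresholds, and the FIFO eviction policy here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class SparseMemory:
    """Hypothetical per-track memory that stores only 'critical' features."""

    def __init__(self, capacity=8, motion_thresh=0.1, overlap_thresh=0.5):
        self.capacity = capacity                # max features kept per track (assumed)
        self.motion_thresh = motion_thresh      # normalized displacement gate (assumed)
        self.overlap_thresh = overlap_thresh    # IoU gate with other tracks (assumed)
        self.store = {}                         # track_id -> list of features

    def maybe_store(self, track_id, feature, box, prev_box, other_boxes):
        # Motion cue: displacement of the box center, normalized by box size.
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        pcx, pcy = (prev_box[0] + prev_box[2]) / 2, (prev_box[1] + prev_box[3]) / 2
        scale = max(box[2] - box[0], box[3] - box[1], 1e-9)
        motion = np.hypot(cx - pcx, cy - pcy) / scale

        # Overlap cue: strongest IoU between this track and any other track.
        overlap = max((iou(box, b) for b in other_boxes), default=0.0)

        # Store only when the frame is informative (fast motion or occlusion
        # risk); otherwise skip, keeping the memory sparse.
        if motion > self.motion_thresh or overlap > self.overlap_thresh:
            mem = self.store.setdefault(track_id, [])
            mem.append(feature)
            if len(mem) > self.capacity:
                mem.pop(0)  # FIFO eviction of the oldest stored feature
```

Under these assumptions, most frames of a slow, unoccluded track are skipped, so the memory grows only at moments that matter for reidentification, which is the source of the efficiency gain over dense memory banks.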