Abstract: Recently, transformer-based large language models (LLMs) have been facing memory bottlenecks caused by the KV cache, especially when handling long sequences. Previous studies proposed KV cache compression techniques that identify insignificant tokens based on the Accumulative Attention Score and remove their entries from the KV cache, observing that only a few tokens play an important role in attention operations. However, we have found that the existing Accumulative Attention Score is not suitable for the transformer decoder structure. In decoder models, the number of times the Attention Score accumulates varies with the order of token appearance because of masking, resulting in an uneven comparison between tokens. To solve this, we propose the Accumulative Attention Score with Forgetting Factor (A2SF) technique, which introduces a Forgetting Factor into the Attention Score accumulation process. A2SF penalizes the past Attention Scores of old tokens by repeatedly multiplying the accumulated score by the Forgetting Factor over time. Older tokens therefore receive a larger penalty, which provides fairness among tokens of different ages and allows important tokens to be selected more effectively. We have verified the accuracy improvement of A2SF on the OPT and LLaMA models; A2SF improves the accuracy of LLaMA 2 by up to 7.8% and 5.1% in the 1-shot and 0-shot settings, respectively.
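As a rough illustration of the accumulation scheme described above (a minimal sketch, not the authors' implementation), the snippet below decays previously accumulated attention scores by a forgetting factor at every decoding step before adding the new attention row, then keeps only the highest-scoring tokens; the forgetting factor value and cache budget are assumed for the example.

```python
import torch

def a2sf_scores(attn, forgetting_factor=0.95):
    """Accumulate per-token attention scores with a forgetting factor.

    attn: (seq_len, seq_len) lower-triangular attention matrix, where
          attn[t, j] is the weight query t places on key j.
    Older contributions are decayed more often, compensating for the fact
    that earlier tokens accumulate scores over more decoding steps.
    """
    seq_len = attn.size(0)
    scores = torch.zeros(seq_len)
    for t in range(seq_len):
        scores = forgetting_factor * scores      # decay everything accumulated so far
        scores[: t + 1] += attn[t, : t + 1]      # add this step's attention row
    return scores

def select_kv_tokens(attn, budget, forgetting_factor=0.95):
    """Keep only the `budget` highest-scoring tokens in the KV cache."""
    scores = a2sf_scores(attn, forgetting_factor)
    return torch.topk(scores, k=min(budget, scores.numel())).indices
```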
Abstract: N:M sparsity pruning is a powerful technique for compressing deep neural networks that exploits NVIDIA's Sparse Tensor Core technology. This method benefits from hardware support for sparse indexing, enabling fine-grained sparsity that maintains model accuracy while minimizing the overhead typically associated with irregular data access. Although restricted to a fixed level of sparsity by its reliance on hardware, N:M sparsity can be combined with coarser sparsity techniques to achieve diverse compression ratios. Initially, column-wise vector sparsity is applied to a dense model, followed by row-wise N:M sparsity on the preserved column vectors. We call this multi-level approach hierarchical N:M (HiNM) sparsity. Like earlier single-level sparsity techniques, HiNM sparsity requires an effective channel permutation strategy to maximize the accuracy of the compressed networks. However, it introduces further complexity by requiring the rearrangement of both input and output channels, raising challenges such as the permutation sequence, HiNM-sparsity-aware permutation, and consistent channel ordering across layers. In this paper, we introduce a channel permutation method designed specifically for HiNM sparsity, named gyro-permutation. This method exploits the unique characteristics of HiNM pruning and incorporates a strategic policy in each permutation phase, including channel sampling, clustering, and assignment, to escape local minima. Additionally, we have developed a GPU kernel that enables independent layer permutation during the execution of HiNM sparse networks. Extensive experimental evaluations on various DNN models demonstrate that gyro-permutation significantly enhances the accuracy of HiNM sparse networks, allowing them to reach performance levels comparable to those of unstructured sparse networks.
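To make the two-level masking concrete, here is a hedged sketch of how column-wise vector sparsity followed by row-wise N:M sparsity could produce a HiNM mask; the vector length, column keep ratio, and N:M values are illustrative assumptions, and this is not the paper's permutation or GPU-kernel code.

```python
import torch

def hinm_mask(weight, vec_len=4, col_keep_ratio=0.5, n=2, m=4):
    """Hierarchical N:M (HiNM) sparsity mask: column-wise vector sparsity
    first, then row-wise N:M sparsity inside the surviving column vectors.

    weight: (rows, cols) dense weight matrix; rows are assumed divisible
    by vec_len in this simplified example.
    """
    rows, cols = weight.shape
    mask = torch.zeros_like(weight)
    keep_cols = max(1, int(cols * col_keep_ratio))

    for r0 in range(0, rows, vec_len):
        block = weight[r0:r0 + vec_len]                    # (vec_len, cols)
        # Level 1: keep the columns whose vectors have the largest L2 norm.
        kept = torch.topk(block.norm(dim=0), keep_cols).indices

        # Level 2: row-wise N:M sparsity on the preserved column vectors.
        sub = block[:, kept]                               # (vec_len, keep_cols)
        for i in range(vec_len):
            row = sub[i]
            for c0 in range(0, keep_cols - keep_cols % m, m):
                grp = row[c0:c0 + m].abs()
                top = torch.topk(grp, n).indices           # n survivors per m weights
                mask[r0 + i, kept[c0 + top]] = 1.0
    return mask
```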
Abstract: Despite the rapid advancement of object detection algorithms, processing high-resolution images on embedded devices remains a significant challenge. In principle, the fully convolutional architecture used in current real-time object detectors can handle any input resolution, but the computational cost of processing high-resolution images makes them impractical for real-time applications. Real-time object detection models therefore typically downsample the input image for inference, which sacrifices detail and reduces accuracy. In response, we developed Octave-YOLO, designed to process high-resolution images in real time within the constraints of embedded systems. We achieve this through the cross frequency partial network (CFPNet), which divides the input feature map into a low-resolution, low-frequency section and a high-resolution, high-frequency section. This design allows expensive operations such as convolution bottlenecks and self-attention to run exclusively on the low-resolution feature maps while preserving detail in the high-resolution maps. Notably, this approach not only dramatically reduces the computational demands of convolution but also enables the integration of attention modules, which are typically difficult to deploy in real-time applications, at minimal additional cost. We also incorporate depthwise separable convolution into the core building blocks and downsampling layers to further reduce latency. Experimental results show that Octave-YOLO matches the performance of YOLOv8 while significantly reducing computational demands. For example, at 1080x1080 resolution, Octave-YOLO-N is 1.56 times faster than YOLOv8, achieving nearly the same accuracy on the COCO dataset with approximately 40 percent fewer parameters and FLOPs.
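The following is a hedged sketch of a cross-frequency split in the spirit of CFPNet: channels are partitioned into a cheap high-resolution branch and a downsampled low-resolution branch where the bottleneck runs. The split ratio, the specific bottleneck, and the fusion step are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossFrequencyPartialBlock(nn.Module):
    """Split channels into a high-resolution branch (kept lightweight) and a
    low-resolution branch where the expensive processing happens."""
    def __init__(self, channels, low_ratio=0.5):
        super().__init__()
        self.low_ch = int(channels * low_ratio)
        self.high_ch = channels - self.low_ch
        # Heavy processing only on the low-resolution, low-frequency part.
        self.low_bottleneck = nn.Sequential(
            nn.Conv2d(self.low_ch, self.low_ch, 3, padding=1, groups=self.low_ch),
            nn.Conv2d(self.low_ch, self.low_ch, 1),
            nn.SiLU(),
        )
        # Cheap depthwise conv keeps detail on the high-resolution part.
        self.high_dw = nn.Conv2d(self.high_ch, self.high_ch, 3, padding=1,
                                 groups=self.high_ch)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        high, low = torch.split(x, [self.high_ch, self.low_ch], dim=1)
        low = F.avg_pool2d(low, 2)               # move to low resolution
        low = self.low_bottleneck(low)           # expensive ops at 1/4 the pixels
        low = F.interpolate(low, size=high.shape[-2:], mode="nearest")
        high = self.high_dw(high)                # lightweight path preserves detail
        return self.fuse(torch.cat([high, low], dim=1))
```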
Abstract: With the recent proliferation of on-device AI, there is an increasing need to run computationally intensive DNNs directly on mobile devices. The limited computing and memory resources of these devices, however, necessitate effective pruning techniques. Block-wise pruning is promising because it trades a small accuracy drop for substantial speedup, but it requires block positions to be aligned to the block size, which prevents choosing the positions that would minimize the accuracy drop. Unaligned block pruning (UBP) addresses this by allowing blocks to be selected at arbitrary positions, yet its practical use has been limited by a time-consuming optimal block selection algorithm and the lack of efficient inference kernels. In this paper, we propose a pseudo-optimal yet fast block selection algorithm called Block Expansion and Division (BED), which can be integrated into an iterative model training process. We also introduce an efficient inference kernel implementation for mobile devices, enabling a UBP-based model to achieve latency similar to that of a DNN compressed by aligned block pruning. We demonstrate the superiority of our techniques on a real mobile phone with MobileNet and ResNet models.
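As an illustration of the aligned-versus-unaligned distinction discussed above (the abstract does not detail BED, so this is only a hedged sketch, not the proposed algorithm), the snippet below selects fixed-size weight blocks per row either at starts aligned to the block size or at arbitrary offsets, scoring candidates by L1 magnitude.

```python
import numpy as np

def select_blocks(weights, block_size, num_blocks, aligned=True):
    """Pick `num_blocks` non-overlapping blocks per row to keep.

    aligned=True restricts block starts to multiples of block_size
    (conventional block-wise pruning); aligned=False allows arbitrary
    starts (unaligned block pruning, UBP), which can capture groups of
    salient weights that straddle aligned boundaries.
    Greedy L1 scoring is used purely for illustration.
    """
    rows, cols = weights.shape
    mask = np.zeros_like(weights)
    step = block_size if aligned else 1
    for r in range(rows):
        scores = [(np.abs(weights[r, s:s + block_size]).sum(), s)
                  for s in range(0, cols - block_size + 1, step)]
        taken = []
        for score, s in sorted(scores, reverse=True):
            if all(abs(s - t) >= block_size for t in taken):   # no overlap
                taken.append(s)
                mask[r, s:s + block_size] = 1.0
                if len(taken) == num_blocks:
                    break
    return mask
```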