Abstract: In robot task planning, large language models (LLMs) have shown significant promise in generating complex, long-horizon action sequences. However, LLMs often produce responses that sound plausible but are not accurate. To address this problem, existing methods typically rely on predefined error sets or external knowledge sources, which require human effort and computational resources. Recently, self-correction approaches have emerged, in which the LLM generates and refines plans, identifying errors by itself. Despite their effectiveness, these approaches are prone to correction failures due to insufficient reasoning. In this paper, we introduce InversePrompt, a novel self-corrective task planning approach that leverages inverse prompting to enhance interpretability. Our method incorporates reasoning steps to provide clear, interpretable feedback: it generates inverse actions corresponding to the initially generated actions and verifies whether these inverse actions can restore the system to its original state, explicitly validating the logical coherence of the generated plans. Results on benchmark datasets show an average 16.3% higher success rate than existing LLM-based task planning methods. Our approach offers clearer justifications for feedback in real-world environments, leading to more successful task completion than existing self-correction approaches across various scenarios.
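The inverse-action check can be illustrated with a small symbolic sketch. The Python snippet below is not the paper's implementation: the world state, the `pick`/`place` action effects, and the `INVERSE` lookup (which in InversePrompt would come from the LLM via inverse prompting) are hypothetical, but it shows how applying an action followed by its proposed inverse and testing whether the original state is restored validates the logical coherence of a plan step.

```python
# Minimal sketch of inverse-action verification (illustrative, not the paper's code).
# A plan step is accepted only if applying its inverse restores the original state.

def apply(state: frozenset, add: set, remove: set) -> frozenset:
    """Apply an action described by add/remove effect sets to a symbolic state."""
    return frozenset((state - frozenset(remove)) | frozenset(add))

# Hypothetical action effects: name -> (add effects, remove effects)
ACTIONS = {
    "pick(cup)":  ({"holding(cup)"}, {"on(cup, table)", "empty(gripper)"}),
    "place(cup)": ({"on(cup, table)", "empty(gripper)"}, {"holding(cup)"}),
}

# Hypothetical inverse-action proposals; in InversePrompt these would be
# generated by the LLM via inverse prompting rather than a fixed lookup.
INVERSE = {"pick(cup)": "place(cup)", "place(cup)": "pick(cup)"}

def verify_with_inverse(state: frozenset, action: str) -> bool:
    """Check coherence: the action followed by its inverse must restore the state."""
    add, rem = ACTIONS[action]
    next_state = apply(state, add, rem)
    inv_add, inv_rem = ACTIONS[INVERSE[action]]
    restored = apply(next_state, inv_add, inv_rem)
    return restored == state

if __name__ == "__main__":
    init = frozenset({"on(cup, table)", "empty(gripper)"})
    print(verify_with_inverse(init, "pick(cup)"))  # True -> this plan step is coherent
```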
Abstract: N:M sparsity pruning is a powerful technique for compressing deep neural networks that exploits NVIDIA's Sparse Tensor Core technology. The method benefits from hardware support for sparse indexing, enabling fine-grained sparsity that maintains model accuracy while minimizing the overhead typically associated with irregular data access. Although its reliance on hardware restricts it to a fixed sparsity level, N:M sparsity can be combined with coarser-grained sparsity techniques to achieve diverse compression ratios. First, column-wise vector sparsity is applied to a dense model, followed by row-wise N:M sparsity on the preserved column vectors. We call this multi-level approach hierarchical N:M (HiNM) sparsity. Like earlier single-level sparsity techniques, HiNM sparsity requires an effective channel permutation strategy to maximize the accuracy of the compressed networks. However, it introduces further complexity by requiring the rearrangement of both input and output channels, raising challenges such as the permutation order, HiNM-sparsity-aware permutation, and consistent channel ordering across layers. In this paper, we introduce a channel permutation method designed specifically for HiNM sparsity, named gyro-permutation. The method is crafted to exploit the unique characteristics of HiNM pruning, incorporating a strategic policy in each permutation phase (channel sampling, clustering, and assignment) to escape local minima. Additionally, we have developed a GPU kernel that enables independent layer permutation during the execution of HiNM sparse networks. Extensive experimental evaluations on various DNN models demonstrate that gyro-permutation significantly enhances the accuracy of HiNM sparse networks, allowing them to reach performance levels comparable to those of unstructured sparse networks.
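To make the two-level pruning order concrete, the following NumPy sketch first applies column-wise vector pruning and then row-wise N:M pruning restricted to the preserved column vectors. The vector height, number of kept columns, and 2:4 pattern are illustrative assumptions rather than the paper's settings, and the sketch omits channel permutation entirely.

```python
# Illustrative sketch of hierarchical N:M (HiNM) pruning: column-vector sparsity
# first, then row-wise N:M sparsity on the surviving column vectors only.
import numpy as np

def column_vector_prune(w, vec, keep_cols):
    """Per group of `vec` rows, keep the `keep_cols` column vectors with the largest L2 norm."""
    rows, cols = w.shape
    assert rows % vec == 0
    blocks = w.reshape(rows // vec, vec, cols)          # (groups, vec, cols)
    norms = np.linalg.norm(blocks, axis=1)              # (groups, cols)
    keep_idx = np.argsort(norms, axis=1)[:, -keep_cols:]
    mask = np.zeros_like(norms, dtype=bool)
    np.put_along_axis(mask, keep_idx, True, axis=1)
    return blocks * mask[:, None, :], mask

def nm_prune_preserved(blocks, mask, n, m):
    """Apply N:M pruning along each row, but only within the preserved columns."""
    out = blocks.copy()
    groups, vec, _ = blocks.shape
    for g in range(groups):
        kept = np.flatnonzero(mask[g])                  # surviving columns in this group
        for r in range(vec):
            vals = out[g, r, kept].reshape(-1, m)       # assumes len(kept) % m == 0
            drop = np.argsort(np.abs(vals), axis=1)[:, : m - n]
            np.put_along_axis(vals, drop, 0.0, axis=1)  # zero the (M-N) smallest per group
            out[g, r, kept] = vals.ravel()
    return out

if __name__ == "__main__":
    w = np.random.randn(8, 8)
    blocks, mask = column_vector_prune(w, vec=4, keep_cols=4)  # 50% column-vector sparsity
    hinm = nm_prune_preserved(blocks, mask, n=2, m=4).reshape(w.shape)
    print("overall sparsity:", (hinm == 0).mean())              # 0.75 with these settings
```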
Abstract: With the recent proliferation of on-device AI, there is an increasing need to run computationally intensive DNNs directly on mobile devices. However, the limited computing and memory resources of these devices demand effective pruning techniques. Block-wise pruning is promising because it trades a small accuracy drop for significant speedup, but it requires block positions to be aligned to the block size, preventing the selection of positions that would minimize the accuracy drop. Unaligned block pruning (UBP) addresses this by allowing blocks to be selected at arbitrary positions, yet its practical use has been limited by a time-consuming optimal block selection algorithm and the lack of efficient inference kernels. In this paper, we propose a pseudo-optimal yet fast block selection algorithm called Block Expansion and Division (BED), which can be integrated into an iterative model training process. Additionally, we introduce an efficient inference kernel implementation for mobile devices, enabling a UBP-based model to achieve latency similar to that of a DNN model compressed by aligned block pruning. We demonstrate the superiority of our techniques on a real mobile phone with MobileNet and ResNet models.
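The difference between aligned and unaligned block selection can be sketched in a few lines. The snippet below greedily keeps the highest-norm, non-overlapping blocks in a single weight row and only changes the set of allowed start offsets; it illustrates the larger search space that UBP exploits, not the paper's BED algorithm or its inference kernel.

```python
# Illustrative sketch: aligned vs. unaligned block selection for one weight row.
import numpy as np

def select_blocks(row, block, n_keep, aligned):
    """Greedily pick `n_keep` non-overlapping blocks with the largest L2 norms."""
    starts = range(0, len(row) - block + 1, block if aligned else 1)
    ranked = sorted(starts, key=lambda s: -np.linalg.norm(row[s:s + block]))
    chosen = []
    for s in ranked:
        if all(abs(s - c) >= block for c in chosen):    # enforce non-overlap
            chosen.append(s)
            if len(chosen) == n_keep:
                break
    return sorted(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    row = rng.standard_normal(16)
    # Aligned pruning may only start blocks at multiples of the block size,
    # while UBP may start them at any offset, often preserving more weight mass.
    print("aligned starts:  ", select_blocks(row, block=4, n_keep=2, aligned=True))
    print("unaligned starts:", select_blocks(row, block=4, n_keep=2, aligned=False))
```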