
Guohao Dai

Efficient and Effective Retrieval of Dense-Sparse Hybrid Vectors using Graph-based Approximate Nearest Neighbor Search

Oct 27, 2024

Large Language Model Inference Acceleration: A Comprehensive Hardware Perspective

Oct 06, 2024

Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding

Oct 02, 2024

CSKV: Training-Efficient Channel Shrinking for KV Cache in Long-Context Scenarios

Sep 16, 2024

MARCA: Mamba Accelerator with ReConfigurable Architecture

Sep 16, 2024

Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs

Jul 01, 2024

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

Jun 21, 2024

Can LLMs Learn by Teaching? A Preliminary Study

Jun 20, 2024

DiTFastAttn: Attention Compression for Diffusion Transformer Models

Jun 12, 2024

ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation

Jun 04, 2024