Yuzhang Shang

PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training Quantization Methods for Large Language Models

Feb 18, 2025

Benchmarking Post-Training Quantization in LLMs: Comprehensive Taxonomy, Unified Evaluation, and Comparative Analysis

Feb 18, 2025

GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning

Feb 18, 2025

DLFR-VAE: Dynamic Latent Frame Rate VAE for Video Generation

Feb 17, 2025

E-CAR: Efficient Continuous Autoregressive Image Generation via Multistage Modeling

Dec 19, 2024

freePruner: A Training-free Approach for Large Multimodal Model Acceleration

Nov 23, 2024

Prompt Diffusion Robustifies Any-Modality Prompt Learning

Oct 26, 2024

TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models

Oct 15, 2024

DKDM: Data-Free Knowledge Distillation for Diffusion Models with Any Architecture

Sep 05, 2024

Distilling Long-tailed Datasets

Aug 24, 2024