Haoru Tan

MC-MoE: Mixture Compressor for Mixture-of-Experts LLMs Gains More

Oct 08, 2024

Differentiable Proximal Graph Matching

May 26, 2024

Ensemble Quadratic Assignment Network for Graph Matching

Mar 11, 2024

Debiasing Text-to-Image Diffusion Models

Feb 22, 2024

Data Pruning via Moving-one-Sample-out

Oct 25, 2023

Semantic Diffusion Network for Semantic Segmentation

Feb 04, 2023

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference

Dec 10, 2022

Pale Transformer: A General Vision Transformer Backbone with Pale-Shaped Attention

Dec 28, 2021