Jinqi Xiao

COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection

Nov 26, 2024

MoE-I$^2$: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition

Nov 01, 2024

DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models

Feb 05, 2024

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks

Jan 18, 2024

COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models

Jun 09, 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks

Jan 20, 2023