
Yen-Chang Hsu

DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models (Oct 15, 2024)

Retraining-Free Merging of Sparse Mixture-of-Experts via Hierarchical Clustering (Oct 11, 2024)

MoDeGPT: Modular Decomposition for Large Language Model Compression (Aug 20, 2024)

DynaMo: Accelerating Language Model Inference with Dynamic Multi-Token Sampling (May 01, 2024)

Token Fusion: Bridging the Gap between Token Pruning and Token Merging (Dec 02, 2023)

Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters (Nov 30, 2023)

Training Energy-Based Normalizing Flow with Score-Matching Objectives (May 24, 2023)

Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA (Apr 12, 2023)

Numerical Optimizations for Weighted Low-rank Estimation on Language Model (Nov 02, 2022)

Language model compression with weighted low-rank factorization (Jun 30, 2022)