Shiwei Liu

Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More

Feb 11, 2025

The Curse of Depth in Large Language Models

Feb 09, 2025

SPAM: Spike-Aware Adam with Momentum Reset for Stable LLM Training

Jan 12, 2025

Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN

Dec 18, 2024

SIDE: Socially Informed Drought Estimation Toward Understanding Societal Impact Dynamics of Environmental Crisis

Dec 17, 2024

Condense, Don't Just Prune: Enhancing Efficiency and Performance in MoE Layer Pruning

Nov 26, 2024

AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models

Oct 14, 2024

Full-Rank No More: Low-Rank Weight Training for Modern Speech Recognition Models

Oct 10, 2024

Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning

Oct 09, 2024

(PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork

Jul 24, 2024