Lingyi Huang

MoE-I$^2$: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition

Nov 01, 2024

In-Sensor Radio Frequency Computing for Energy-Efficient Intelligent Radar

Dec 16, 2023

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition

Dec 05, 2022

Robot Motion Planning as Video Prediction: A Spatio-Temporal Neural Network-based Motion Planner

Aug 24, 2022