Miao Yin

AdaCM$^2$: On Understanding Extremely Long-Term Video with Adaptive Cross-Modality Memory Reduction
Nov 19, 2024

GaussianSpa: An "Optimizing-Sparsifying" Simplification Framework for Compact and High-Quality 3D Gaussian Splatting
Nov 09, 2024

MoE-I$^2$: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition
Nov 01, 2024

Enhancing Lossy Compression Through Cross-Field Information for Scientific Applications
Sep 26, 2024

NeurLZ: On Enhancing Lossy Compression Performance based on Error-Controlled Neural Learning for Scientific Data
Sep 10, 2024

SmartMem: Layout Transformation Elimination and Adaptation for Efficient DNN Execution on Mobile
Apr 21, 2024

GWLZ: A Group-wise Learning-based Lossy Compression Framework for Scientific Data
Apr 20, 2024

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks
Jan 18, 2024

COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models
Jun 09, 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks
Jan 20, 2023