
Yu Gong

A Variance Minimization Approach to Temporal-Difference Learning

Nov 10, 2024

MoE-I$^2$: Compressing Mixture of Experts Models through Inter-Expert Pruning and Intra-Expert Low-Rank Decomposition

Nov 01, 2024

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks

Jan 18, 2024

COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models

Jun 09, 2023

Human-machine knowledge hybrid augmentation method for surface defect detection based few-data learning

May 02, 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks

Jan 20, 2023

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition

Dec 05, 2022

RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression

May 30, 2022

GIFT: Graph-guIded Feature Transfer for Cold-Start Video Click-Through Rate Prediction

Feb 21, 2022

N3H-Core: Neuron-designed Neural Network Accelerator via FPGA-based Heterogeneous Computing Cores

Dec 15, 2021