
Jinmian Ye

Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks

Sep 18, 2019

Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition

Nov 19, 2018

Adversarial Noise Layer: Regularize Neural Network By Adding Noise

Oct 30, 2018

Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

May 11, 2018

SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

Jan 13, 2018

BT-Nets: Simplifying Deep Neural Networks via Block Term Decomposition

Dec 15, 2017

Simple and Efficient Parallelization for Probabilistic Temporal Tensor Factorization

Nov 11, 2016