
Thu Dinh

Quantization-Guided Training for Compact TinyML Models

Mar 10, 2021

Subtensor Quantization for Mobilenets

Nov 04, 2020

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets

Mar 02, 2020

Convergence of a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning Sparse Weight Binarized Activation Neural Networks

Feb 09, 2019