
Yoonho Boo

Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

Sep 30, 2020

Quantized Neural Networks: Characterization and Holistic Optimization

May 31, 2020

SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks

Feb 02, 2020

Empirical Analysis of Knowledge Distillation Technique for Optimization of Quantized Deep Neural Networks

Oct 05, 2019

Structured Sparse Ternary Weight Coding of Deep Neural Networks for Efficient Hardware Implementations

Jul 01, 2017

Fixed-Point Optimization of Deep Neural Networks with Adaptive Step Size Retraining

Feb 27, 2017