
Penghang Yin

MagR: Weight Magnitude Reduction for Enhancing Post-Training Quantization

Jun 02, 2024

COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization

Mar 11, 2024

Feature Affinity Assisted Knowledge Distillation and Quantization of Deep Neural Networks on Label-Free Data

Feb 10, 2023

Recurrence of Optimum for Training Weight and Activation Quantized Networks

Dec 10, 2020

Learning Quantized Neural Nets by Coarse Gradient Method for Non-linear Classification

Nov 23, 2020

Global Convergence and Geometric Characterization of Slow to Fast Weight Evolution in Neural Network Training for Classifying Linearly Non-Separable Data

Mar 05, 2020

Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets

Mar 13, 2019

Non-ergodic Convergence Analysis of Heavy-Ball Algorithms

Nov 09, 2018

Laplacian Smoothing Gradient Descent

Oct 17, 2018

Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization

Sep 23, 2018