
Kanya Mo

Zhejiang University-UIUC Institute

Up to 100x Faster Data-free Knowledge Distillation

Dec 12, 2021

Exploiting Spline Models for the Training of Fully Connected Layers in Neural Network

Feb 12, 2021