Do Not Blindly Imitate the Teacher: Using Perturbed Loss for Knowledge Distillation

May 08, 2023

View paper on arXiv
