
Gaurav Menghani

LAuReL: Learned Augmented Residual Layer

Nov 13, 2024

SLaM: Student-Label Mixing for Semi-Supervised Knowledge Distillation

Feb 08, 2023

Weighted Distillation with Unlabeled Examples

Oct 13, 2022

Robust Active Distillation

Oct 03, 2022

Efficient Deep Learning: A Survey on Making Deep Learning Models Smaller, Faster, and Better

Jun 21, 2021

Learning from a Teacher using Unlabeled Data

Nov 13, 2019