Ahmad Sajedi

Data-to-Model Distillation: Data-Efficient Learning Framework

Nov 19, 2024

Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios

Oct 22, 2024

GSTAM: Efficient Graph Distillation with Structural Attention-Matching

Aug 29, 2024

ATOM: Attention Mixer for Efficient Dataset Distillation

May 02, 2024

ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification

Jan 02, 2024

DataDAM: Efficient Dataset Distillation with Attention Matching

Sep 29, 2023

End-to-End Supervised Multilabel Contrastive Learning

Jul 08, 2023

A New Probabilistic Distance Metric With Application In Gaussian Mixture Reduction

Jun 12, 2023

Subclass Knowledge Distillation with Known Subclass Labels

Jul 17, 2022

On the Efficiency of Subclass Knowledge Distillation in Classification Tasks

Sep 12, 2021