Less is More: Task-aware Layer-wise Distillation for Language Model Compression

Oct 05, 2022


View paper on: arXiv · OpenReview