KD-VLP: Improving End-to-End Vision-and-Language Pretraining with Object Knowledge Distillation

Sep 22, 2021