VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification

May 24, 2022
Figures 1–4 for VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification

View paper on arXiv