
Yuanhong Xu

SeA: Semantic Adversarial Augmentation for Last Layer Features from Unsupervised Representation Learning

Aug 23, 2024

Intra-Modal Proxy Learning for Zero-Shot Visual Categorization with CLIP

Oct 30, 2023

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality

Apr 27, 2023

Improved Visual Fine-tuning with Natural Language Supervision

Apr 04, 2023

mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video

Feb 01, 2023

An Empirical Study on Distribution Shift Robustness From the Perspective of Pre-Training and Data Augmentation

May 25, 2022

Improved Fine-tuning by Leveraging Pre-training Data: Theory and Practice

Nov 24, 2021

Unsupervised Visual Representation Learning by Online Constrained K-Means

May 24, 2021

Towards Understanding Label Smoothing

Jun 20, 2020

Representation Learning with Fine-grained Patterns

May 19, 2020