Dapeng Hu

Climate AI for Corporate Decarbonization Metrics Extraction
Nov 05, 2024

PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation
Jul 14, 2023

UMAD: Universal Model Adaptation under Domain and Category Shift
Dec 16, 2021

No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
Jun 09, 2021

How Well Self-Supervised Pre-Training Performs with Streaming Data?
Apr 25, 2021

Distill and Fine-tune: Effective Adaptation from a Black-box Source Model
Apr 04, 2021

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
Feb 12, 2021

Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer
Dec 14, 2020

Combating Domain Shift with Self-Taught Labeling
Jul 08, 2020

PANDA: Prototypical Unsupervised Domain Adaptation
Apr 12, 2020