Mong Li Lee

ChronoFact: Timeline-based Temporal Fact Verification (Oct 19, 2024)

Evidence-Based Temporal Fact Verification (Jul 21, 2024)

Cross-Domain Feature Augmentation for Domain Generalization (May 14, 2024)

SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection (Mar 05, 2024)

Leveraging Old Knowledge to Continually Learn New Classes in Medical Images (Mar 24, 2023)

Distributional Shifts in Automated Diabetic Retinopathy Screening (Jul 25, 2021)

Towards Fully Interpretable Deep Neural Networks: Are We There Yet? (Jun 24, 2021)

Adversarially Robust Classifier with Covariate Shift Adaptation (Feb 09, 2021)

Learning Semantically Meaningful Features for Interpretable Classifications (Jan 11, 2021)

Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples (Oct 20, 2020)