
An Nguyen

How Intermodal Interaction Affects the Performance of Deep Multimodal Fusion for Mixed-Type Time Series

Jun 21, 2024

Mixture of Experts Meets Prompt-Based Continual Learning

May 23, 2024

Just a Matter of Scale? Reevaluating Scale Equivariance in Convolutional Neural Networks

Nov 18, 2022

Improving Robustness against Real-World and Worst-Case Distribution Shifts through Decision Region Quantification

May 19, 2022

Explain to Not Forget: Defending Against Catastrophic Forgetting with XAI

May 11, 2022

Language Model Evaluation in Open-ended Text Generation

Aug 08, 2021

Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks

May 25, 2021

Identifying Untrustworthy Predictions in Neural Networks by Geometric Gradient Analysis

Feb 24, 2021

System Design for a Data-driven and Explainable Customer Sentiment Monitor

Jan 11, 2021

Sampled Nonlocal Gradients for Stronger Adversarial Attacks

Nov 05, 2020