
Arjun Nitin Bhagoji

MYCROFT: Towards Effective and Efficient External Data Augmentation

Oct 11, 2024

Towards Scalable and Robust Model Versioning

Jan 17, 2024

Characterizing the Optimal 0-1 Loss for Multi-class Classification with a Test-time Attacker

Feb 21, 2023

Augmenting Rule-based DNS Censorship Detection at Scale with Machine Learning

Feb 03, 2023

Natural Backdoor Datasets

Jun 21, 2022

Understanding Robust Learning through the Lens of Representation Similarities

Jun 20, 2022

Can Backdoor Attacks Survive Time-Varying Models?

Jun 08, 2022

Traceback of Data Poisoning Attacks in Neural Networks

Oct 13, 2021

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries

Apr 16, 2021

A Critical Evaluation of Open-World Machine Learning

Jul 08, 2020