Jirong Yi

Towards unlocking the mystery of adversarial fragility of neural networks
Jun 23, 2024

Outlier Detection Using Generative Models with Theoretical Performance Guarantees
Oct 16, 2023

Mutual Information Learned Regressor: an Information-theoretic Viewpoint of Training Regression Systems
Nov 23, 2022

Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems
Oct 03, 2022

Solving Large Scale Quadratic Constrained Basis Pursuit
Apr 02, 2021

Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning
Jul 28, 2020

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks
Mar 26, 2020

Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks
May 25, 2019

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers
Jan 27, 2019

Outlier Detection using Generative Models with Theoretical Performance Guarantees
Oct 26, 2018