Raghu Mudumbai

Towards unlocking the mystery of adversarial fragility of neural networks

Jun 23, 2024

Slaves to the Law of Large Numbers: An Asymptotic Equipartition Property for Perplexity in Generative Language Models

May 22, 2024

Linear Progressive Coding for Semantic Communication using Deep Neural Networks

Sep 27, 2023

Derivation of Information-Theoretically Optimal Adversarial Attacks with Applications to Robust Machine Learning

Jul 28, 2020

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks

Mar 26, 2020

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

Jan 27, 2019