
Hui Xie

Towards unlocking the mystery of adversarial fragility of neural networks

Jun 23, 2024

Distance Guided Generative Adversarial Network for Explainable Binary Classifications

Dec 29, 2023

gcDLSeg: Integrating Graph-cut into Deep Learning for Binary Semantic Segmentation

Dec 07, 2023

A deep learning network with differentiable dynamic programming for retina OCT surface segmentation

Oct 08, 2022

End to end hyperspectral imaging system with coded compression imaging process

Sep 06, 2021

Globally Optimal Segmentation of Mutually Interacting Surfaces using Deep Learning

Jul 15, 2020

Trust but Verify: An Information-Theoretic Explanation for the Adversarial Fragility of Machine Learning Systems, and a General Defense against Adversarial Attacks

May 25, 2019

An Information-Theoretic Explanation for the Adversarial Fragility of AI Classifiers

Jan 27, 2019