Matthew C. Stamm

Beyond Deepfake Images: Detecting AI-Generated Videos

Apr 24, 2024

E3: Ensemble of Expert Embedders for Adapting Synthetic Image Detectors to New Generators Using Limited Data

Apr 12, 2024

Open Set Synthetic Image Source Attribution

Aug 22, 2023

Comprehensive Dataset of Synthetic and Manipulated Overhead Imagery for Development and Evaluation of Forensic Tools

May 09, 2023

VideoFACT: Detecting Video Forgeries Using Attention, Scene Context, and Forensic Traces

Nov 28, 2022

Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors

Apr 25, 2021

The Effect of Class Definitions on the Transferability of Adversarial Attacks Against Forensic CNNs

Jan 26, 2021

Defenses Against Multi-Sticker Physical Domain Attacks on Classifiers

Jan 26, 2021

A Transferable Anti-Forensic Attack on Forensic CNNs Using A Generative Adversarial Network

Jan 23, 2021

Exposing Fake Images with Forensic Similarity Graphs

Dec 05, 2019