
Earlence Fernandes

Misusing Tools in Large Language Models With Visual Adversarial Examples

Oct 04, 2023

SkillFence: A Systems Approach to Practically Mitigating Voice-Based Confusion Attacks

Dec 16, 2022

Re-purposing Perceptual Hashing based Client Side Scanning for Physical Surveillance

Dec 08, 2022

Exploring Adversarial Robustness of Deep Metric Learning

Feb 14, 2021

Sequential Attacks on Kalman Filter-based Forward Collision Warning Systems

Dec 16, 2020

Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect

Nov 30, 2020

Query-Efficient Physical Hard-Label Attacks on Deep Learning Visual Classification

Feb 17, 2020

Analyzing the Interpretability Robustness of Self-Explaining Models

May 27, 2019

Physical Adversarial Examples for Object Detectors

Oct 05, 2018

Note on Attacking Object Detectors with Adversarial Stickers

Jul 23, 2018