Nael Abu-Ghazaleh

Unfair Alignment: Examining Safety Alignment Across Vision Encoder Layers in Vision-Language Models
Nov 06, 2024

Cross-Modal Safety Alignment: Is Textual Unlearning All You Need?
May 27, 2024

May the Noise Be with You: Adversarial Training without Adversarial Examples
Dec 12, 2023

Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems
Nov 30, 2023

Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches
Nov 21, 2023

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Oct 16, 2023

Plug and Pray: Exploiting Off-the-Shelf Components of Multi-Modal Models
Jul 26, 2023

Learn to Compress (LtC): Efficient Learning-based Streaming Video Analytics
Jul 25, 2023

DeepMem: ML Models as Storage Channels and Their Applications
Jul 24, 2023

Jedi: Entropy-based Localization and Removal of Adversarial Patches
Apr 20, 2023