Ihsen Alouani

Bypassing Prompt Injection Detectors through Evasive Injections
Jan 31, 2026

AttenMIA: LLM Membership Inference Attack through Attention Signals
Jan 26, 2026

Emerging Threats and Countermeasures in Neuromorphic Systems: A Survey
Jan 23, 2026

Attention Eclipse: Manipulating Attention to Bypass LLM Safety-Alignment
Feb 21, 2025

Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study
Nov 10, 2024

Model for Peanuts: Hijacking ML Models without Training Access is Possible
Jun 03, 2024

Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks
May 07, 2024

SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications
Mar 18, 2024

BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
Feb 01, 2024

Evasive Hardware Trojan through Adversarial Power Trace
Jan 04, 2024