Ihsen Alouani

Are Neuromorphic Architectures Inherently Privacy-preserving? An Exploratory Study

Nov 10, 2024

Model for Peanuts: Hijacking ML Models without Training Access is Possible

Jun 03, 2024

Watermarking Neuromorphic Brains: Intellectual Property Protection in Spiking Neural Networks

May 07, 2024

SSAP: A Shape-Sensitive Adversarial Patch for Comprehensive Disruption of Monocular Depth Estimation in Autonomous Navigation Applications

Mar 18, 2024

BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks

Feb 01, 2024

Evasive Hardware Trojan through Adversarial Power Trace

Jan 04, 2024

May the Noise be with you: Adversarial Training without Adversarial Examples

Dec 12, 2023

Fool the Hydra: Adversarial Attacks against Multi-view Object Detection Systems

Nov 30, 2023

Attention Deficit is Ordered! Fooling Deformable Vision Transformers with Collaborative Adversarial Patches

Nov 21, 2023

DeepMem: ML Models as storage channels and their applications

Jul 24, 2023