
Kaleel Mahmood

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

May 25, 2024

Distilling Adversarial Robustness Using Heterogeneous Teachers

Feb 23, 2024

AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

Aug 20, 2023

Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models

May 20, 2023

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

Apr 24, 2023

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

Nov 26, 2022

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

Sep 22, 2022

Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples

Sep 07, 2022

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks

Sep 29, 2021

On the Robustness of Vision Transformers to Adversarial Examples

Mar 31, 2021