Kaleel Mahmood

Enhanced Computationally Efficient Long LoRA Inspired Perceiver Architectures for Auto-Regressive Language Modeling

Dec 08, 2024

Theoretical Corrections and the Leveraging of Reinforcement Learning to Enhance Triangle Attack

Nov 18, 2024

Certifying Adapters: Enabling and Enhancing the Certification of Classifier Adversarial Robustness

May 25, 2024

Distilling Adversarial Robustness Using Heterogeneous Teachers

Feb 23, 2024

AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

Aug 20, 2023

Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models

Add code
May 20, 2023
Viaarxiv icon

Neurogenesis Dynamics-inspired Spiking Neural Network Training Acceleration

Apr 24, 2023

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

Nov 26, 2022

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

Sep 22, 2022

Securing the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples

Sep 07, 2022