Issa Khalil

Demo: SGCode: A Flexible Prompt-Optimizing System for Secure Generation of Code

Sep 11, 2024

Explainable AI-based Intrusion Detection System for Industry 5.0: An Overview of the Literature, associated Challenges, the existing Solutions, and Potential Research Directions

Jul 21, 2024

Multi-Instance Adversarial Attack on GNN-Based Malicious Domain Detection

Aug 22, 2023

FairDP: Certified Fairness with Differential Privacy

May 25, 2023

Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks

Nov 10, 2022

Ten Years after ImageNet: A 360° Perspective on AI

Oct 01, 2022

An Adaptive Black-box Defense against Trojan Attacks

Sep 05, 2022

Model Transferring Attacks to Backdoor HyperNetwork in Personalized Federated Learning

Jan 19, 2022

A Synergetic Attack against Neural Network Classifiers combining Backdoor and Adversarial Examples

Sep 03, 2021

Time-Window Group-Correlation Support vs. Individual Features: A Detection of Abnormal Users

Dec 27, 2020