Dario Pasquini

Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks

Oct 28, 2024

LLMmap: Fingerprinting For Large Language Models

Jul 24, 2024

Neural Exec: Learning (and Learning from) Execution Triggers for Prompt Injection Attacks

Mar 06, 2024

Can Decentralized Learning be more robust than Federated Learning?

Mar 07, 2023

Universal Neural-Cracking-Machines: Self-Configurable Password Models from Auxiliary Data

Jan 18, 2023

On the Privacy of Decentralized Machine Learning

Add code
May 17, 2022
Figure 1 for On the Privacy of Decentralized Machine Learning
Figure 2 for On the Privacy of Decentralized Machine Learning
Figure 3 for On the Privacy of Decentralized Machine Learning
Figure 4 for On the Privacy of Decentralized Machine Learning
Viaarxiv icon

Eluding Secure Aggregation in Federated Learning via Model Inconsistency

Nov 14, 2021

Unleashing the Tiger: Inference Attacks on Split Learning

Dec 04, 2020

Reducing Bias in Modeling Real-world Password Strength via Deep Learning and Dynamic Dictionaries

Oct 26, 2020

Interpretable Probabilistic Password Strength Meters via Deep Learning

Apr 29, 2020