Ron Bitton

Unleashing Worms and Extracting Data: Escalating the Outcome of Attacks against RAG-based Inference in Scale and Severity Using Jailbreaking
Sep 12, 2024

A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares
Aug 09, 2024

The Adversarial Implications of Variable-Time Inference
Sep 05, 2023

Latent SHAP: Toward Practical Human-Interpretable Explanations
Nov 27, 2022

Attacking Object Detector Using A Universal Targeted Label-Switch Patch
Nov 16, 2022

Improving Interpretability via Regularization of Neural Activation Sensitivity
Nov 16, 2022

Adversarial Machine Learning Threat Analysis in Open Radio Access Networks
Jan 16, 2022

A Framework for Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
Jul 05, 2021

Adversarial robustness via stochastic regularization of neural activation sensitivity
Sep 23, 2020

An Automated, End-to-End Framework for Modeling Attacks From Vulnerability Descriptions
Aug 10, 2020