
Ateret Anaby-Tavor

On the Robustness of Agentic Function Calling

Apr 01, 2025

Breaking ReAct Agents: Foot-in-the-Door Attack Will Get You In

Oct 22, 2024

Exploring Straightforward Conversational Red-Teaming

Sep 07, 2024

A Novel Metric for Measuring the Robustness of Large Language Models in Non-adversarial Scenarios

Aug 04, 2024

From Zero to Hero: Cold-Start Anomaly Detection

May 30, 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024

SpeCrawler: Generating OpenAPI Specifications from API Documentation Using Large Language Models

Feb 18, 2024

What's the Plan? Evaluating and Developing Planning-Aware Techniques for LLMs

Feb 18, 2024

Unveiling Safety Vulnerabilities of Large Language Models

Nov 07, 2023

Predicting Question-Answering Performance of Large Language Models through Semantic Consistency

Nov 02, 2023