
Feiran Jia

The Task Shield: Enforcing Task Alignment to Defend Against Indirect Prompt Injection in LLM Agents

Dec 21, 2024

Can Large Language Model Agents Simulate Human Trust Behaviors?

Feb 07, 2024

An Empirical Study on Challenging Math Problem Solving with GPT-4

Jun 08, 2023

Uncovering Adversarial Risks of Test-Time Adaptation

Feb 04, 2023

RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model

Jun 01, 2022