Jose Such

Privacy and Safety Experiences and Concerns of U.S. Women Using Generative AI for Seeking Sexual and Reproductive Health Information

Mar 10, 2026

Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency

Jan 23, 2026

The Influence of Human-like Appearance on Expected Robot Explanations

Dec 12, 2025

Towards Safer Chatbots: A Framework for Policy Compliance Evaluation of Custom GPTs

Feb 03, 2025

CASE-Bench: Context-Aware Safety Evaluation Benchmark for Large Language Models

Jan 24, 2025

A Holistic Indicator of Polarization to Measure Online Sexism

Apr 02, 2024

Moral Uncertainty and the Problem of Fanaticism

Dec 18, 2023

AI in the Gray: Exploring Moderation Policies in Dialogic Large Language Models vs. Human Answers in Controversial Topics

Aug 28, 2023

MalProtect: Stateful Defense Against Adversarial Query Attacks in ML-based Malware Detection

Feb 21, 2023

Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection

Feb 01, 2023