Xuhui Zhou

AutoPresent: Designing Structured Visuals from Scratch

Jan 01, 2025

Bridging the Data Provenance Gap Across Text, Speech and Video

Dec 19, 2024

TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks

Dec 18, 2024

Minion: A Technology Probe for Resolving Value Conflicts through Expert-Driven and User-Driven Strategies in AI Companion Applications

Nov 11, 2024

BIG5-CHAT: Shaping LLM Personalities Through Training on Human-Grounded Data

Oct 21, 2024

HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions

Sep 26, 2024

AI-LieDar: Examine the Trade-off Between Utility and Truthfulness in LLM Agents

Sep 13, 2024

On the Resilience of Multi-Agent Systems with Malicious Agents

Aug 02, 2024

Consent in Crisis: The Rapid Decline of the AI Data Commons

Jul 24, 2024

PolygloToxicityPrompts: Multilingual Evaluation of Neural Toxic Degeneration in Large Language Models

May 15, 2024