Canyu Chen

Can Knowledge Editing Really Correct Hallucinations?

Oct 21, 2024

FMBench: Benchmarking Fairness in Multimodal Large Language Models on Medical Tasks

Oct 01, 2024

Model Attribution in Machine-Generated Disinformation: A Domain Generalization Approach with Supervised Contrastive Learning

Jul 31, 2024

Can Editing LLMs Inject Harm?

Jul 29, 2024

MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?

Jul 05, 2024

Introducing v0.5 of the AI Safety Benchmark from MLCommons

Apr 18, 2024

Can Large Language Models Identify Authorship?

Mar 13, 2024

Can Large Language Model Agents Simulate Human Trust Behaviors?

Feb 07, 2024

Can LLM-Generated Misinformation Be Detected?

Sep 25, 2023

MetaGAD: Learning to Meta Transfer for Few-shot Graph Anomaly Detection

May 18, 2023