Mianqiu Huang

LongSafetyBench: Long-Context LLMs Struggle with Safety Issues
Nov 11, 2024

MetaAlign: Align Large Language Models with Diverse Preferences during Inference Time
Oct 18, 2024

Calibrating the Confidence of Large Language Models by Eliciting Fidelity
Apr 03, 2024

Evaluating Hallucinations in Chinese Large Language Models
Oct 05, 2023