Mianqiu Huang

Thus Spake Long-Context Large Language Model

Feb 24, 2025

LongSafetyBench: Long-Context LLMs Struggle with Safety Issues

Nov 11, 2024

MetaAlign: Align Large Language Models with Diverse Preferences during Inference Time

Oct 18, 2024

Calibrating the Confidence of Large Language Models by Eliciting Fidelity

Apr 03, 2024

Evaluating Hallucinations in Chinese Large Language Models

Oct 05, 2023