
Chenkun Tan

LongSafetyBench: Long-Context LLMs Struggle with Safety Issues

Nov 11, 2024

MetaAlign: Align Large Language Models with Diverse Preferences during Inference Time

Oct 18, 2024

InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance

Jan 20, 2024