Donghai Hong

Libra-Leaderboard: Towards Responsible AI through a Balanced Leaderboard of Safety and Capability

Dec 24, 2024

Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback

Dec 20, 2024

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models

Jun 20, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction

Feb 06, 2024