Josef Dai

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models

Jun 20, 2024

SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset

Jun 20, 2024

Rethinking Information Structures in RLHF: Reward Generalization from a Graph Theory Perspective

Feb 20, 2024

Safe RLHF: Safe Reinforcement Learning from Human Feedback

Oct 19, 2023