
Mickel Liu

Safe RLHF: Safe Reinforcement Learning from Human Feedback

Oct 19, 2023

Baichuan 2: Open Large-scale Language Models

Sep 20, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

Jul 10, 2023

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research

May 16, 2023

Proactive Multi-Camera Collaboration For 3D Human Pose Estimation

Mar 07, 2023