Juntao Dai

Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback

Aug 30, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction

Feb 06, 2024

AI Alignment: A Comprehensive Survey

Nov 01, 2023

Safety-Gymnasium: A Unified Safe Reinforcement Learning Benchmark

Oct 19, 2023

BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

Jul 10, 2023

OmniSafe: An Infrastructure for Accelerating Safe Reinforcement Learning Research

May 16, 2023

Constrained Update Projection Approach to Safe Policy Optimization

Sep 15, 2022

CUP: A Conservative Update Policy Algorithm for Safe Reinforcement Learning

Feb 15, 2022