Lauro Langosco

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Apr 15, 2024

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback

Jul 27, 2023

Unifying Grokking and Double Descent

Mar 10, 2023

Objective Robustness in Deep Reinforcement Learning

Jun 08, 2021