Alekh Agarwal

Preserving Expert-Level Privacy in Offline Reinforcement Learning

Nov 18, 2024

Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning

Oct 10, 2024

Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning

Jul 22, 2024

Robust Preference Optimization through Reward Model Distillation

May 29, 2024

Offline Imitation Learning from Multiple Baselines with Applications to Compiler Optimization

Mar 28, 2024

Stochastic Gradient Succeeds for Bandits

Feb 27, 2024

More Benefits of Being Distributional: Second-Order Bounds for Reinforcement Learning

Feb 11, 2024

A Minimaximalist Approach to Reinforcement Learning from Human Feedback

Jan 08, 2024

Theoretical guarantees on the best-of-n alignment policy

Jan 03, 2024

Helping or Herding? Reward Model Ensembles Mitigate but do not Eliminate Reward Hacking

Dec 21, 2023