Binghai Wang

Outcome Accuracy is Not Enough: Aligning the Reasoning Process of Reward Models

Feb 04, 2026

AgentPRM: Process Reward Models for LLM Agents via Step-Wise Promise and Progress

Nov 11, 2025

WorldPM: Scaling Human Preference Modeling

May 15, 2025

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment

Oct 13, 2024

Secrets of RLHF in Large Language Models Part II: Reward Modeling

Jan 12, 2024

Secrets of RLHF in Large Language Models Part I: PPO

Jul 18, 2023