Tianyi Qiu

Representative Social Choice: From Learning Theory to AI Alignment
Oct 31, 2024

ProgressGym: Alignment with a Millennium of Moral Progress
Jun 28, 2024

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models
Jun 20, 2024

Language Models Resist Alignment
Jun 10, 2024

Rethinking Information Structures in RLHF: Reward Generalization from a Graph Theory Perspective
Feb 20, 2024

AI Alignment: A Comprehensive Survey
Nov 01, 2023