
Jiaming Ji

RedStar: Does Scaling Long-CoT Data Unlock Better Slow-Reasoning Systems?

Jan 20, 2025

Stream Aligner: Efficient Sentence-Level Alignment via Distribution Induction

Jan 09, 2025

Align Anything: Training All-Modality Models to Follow Instructions with Language Feedback

Dec 20, 2024

Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback

Aug 30, 2024

ProgressGym: Alignment with a Millennium of Moral Progress

Jun 28, 2024

SafeSora: Towards Safety Alignment of Text2Video Generation via a Human Preference Dataset

Jun 20, 2024

PKU-SafeRLHF: A Safety Alignment Preference Dataset for Llama Family Models

Jun 20, 2024

Language Models Resist Alignment

Jun 10, 2024

Rethinking Information Structures in RLHF: Reward Generalization from a Graph Theory Perspective

Feb 20, 2024

Aligner: Achieving Efficient Alignment through Weak-to-Strong Correction

Feb 06, 2024