Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization

Jul 10, 2024

View paper on arXiv