RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback

Dec 01, 2023

View paper on arXiv
