
Yi Dong

Diverging Preferences: When do Annotators Disagree and do Models Know?

Oct 18, 2024

HelpSteer2-Preference: Complementing Ratings with Preferences

Oct 02, 2024

Adaptive Guardrails For Large Language Models via Trust Modeling and In-Context Learning

Aug 16, 2024

Automatically Generating UI Code from Screenshot: A Divide-and-Conquer-Based Approach

Jun 24, 2024

Nemotron-4 340B Technical Report

Jun 17, 2024

HelpSteer2: Open-source dataset for training top-performing reward models

Jun 12, 2024

Safeguarding Large Language Models: A Survey

Jun 03, 2024

I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models

May 26, 2024

NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment

May 02, 2024

Building Guardrails for Large Language Models

Feb 02, 2024