Yi Dong

Predicting Large Language Model Capabilities on Closed-Book QA Tasks Using Only Information Available Prior to Training

Feb 06, 2025

Position: Towards a Responsible LLM-empowered Multi-Agent Systems

Feb 03, 2025

FALCON: Fine-grained Activation Manipulation by Contrastive Orthogonal Unalignment for Large Language Model

Feb 03, 2025

MRWeb: An Exploration of Generating Multi-Page Resource-Aware Web Code from UI Designs

Dec 19, 2024

Diverging Preferences: When do Annotators Disagree and do Models Know?

Oct 18, 2024

HelpSteer2-Preference: Complementing Ratings with Preferences

Oct 02, 2024

Adaptive Guardrails For Large Language Models via Trust Modeling and In-Context Learning

Aug 16, 2024

Automatically Generating UI Code from Screenshot: A Divide-and-Conquer-Based Approach

Jun 24, 2024

Nemotron-4 340B Technical Report

Jun 17, 2024

HelpSteer2: Open-source dataset for training top-performing reward models

Jun 12, 2024