Lichang Chen

OmnixR: Evaluating Omni-modality Language Models on Reasoning across Modalities

Oct 16, 2024

From Lists to Emojis: How Format Bias Affects Model Alignment

Sep 18, 2024

OPTune: Efficient Online Preference Tuning

Jun 11, 2024

Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning

May 30, 2024

Spectrum AUC Difference (SAUCD): Human-aligned 3D Shape Evaluation

Mar 03, 2024

Your Vision-Language Model Itself Is a Strong Filter: Towards High-Quality Instruction Tuning with Data Selection

Feb 19, 2024

Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements

Feb 16, 2024

Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning

Feb 15, 2024

ODIN: Disentangled Reward Mitigates Hacking in RLHF

Feb 11, 2024

GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19 Dataset

Oct 27, 2023