
Yanjun Chen


Instruction-Tuned LLMs Succeed in Document-Level MT Without Fine-Tuning -- But BLEU Turns a Blind Eye

Oct 29, 2024

Corrected Soft Actor Critic for Continuous Control

Oct 22, 2024

The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models

Oct 09, 2024

The Llama 3 Herd of Models

Jul 31, 2024

CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents

Jul 01, 2024

Towards Diverse Temporal Grounding under Single Positive Labels

Mar 12, 2023

VIMA: General Robot Manipulation with Multimodal Prompts

Oct 06, 2022

Embracing Uncertainty: Decoupling and De-bias for Robust Temporal Grounding

Mar 31, 2021