Yongyu Mu

Boosting Text-To-Image Generation via Multilingual Prompting in Large Multimodal Models

Jan 13, 2025

SLAM: Towards Efficient Multilingual Reasoning via Selective Language Alignment

Jan 07, 2025

LRHP: Learning Representations for Human Preferences via Preference Pairs

Oct 06, 2024

RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data

Aug 22, 2024

Cross-layer Attention Sharing for Large Language Models

Aug 04, 2024

Translate-and-Revise: Boosting Large Language Models for Constrained Translation

Jul 18, 2024

Hybrid Alignment Training for Large Language Models

Jun 21, 2024

Large Language Models are Parallel Multilingual Learners

Mar 14, 2024

Augmenting Large Language Model Translators via Translation Memories

May 27, 2023

Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection

Feb 01, 2023