Ru Peng

Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model

Aug 20, 2024

Qwen2 Technical Report

Jul 16, 2024

DotaMath: Decomposition of Thought with Code Assistance and Self-correction for Mathematical Reasoning

Jul 04, 2024

Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation

Jun 20, 2024

DORY: Deliberative Prompt Recovery for LLM

May 31, 2024

Energy-based Automated Model Evaluation

Jan 25, 2024

CAME: Contrastive Automated Model Evaluation

Aug 22, 2023

Better Sign Language Translation with Monolingual Data

Apr 21, 2023

Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation

Oct 10, 2022

Boosting Neural Machine Translation with Dependency-Scaled Self-Attention Network

Nov 25, 2021