Quan Du

RoVRM: A Robust Visual Reward Model Optimized via Auxiliary Textual Preference Data
Aug 22, 2024

Learning Evaluation Models from Large Language Models for Sequence Generation
Aug 08, 2023

ODE Transformer: An Ordinary Differential Equation-Inspired Model for Sequence Generation
Mar 17, 2022

ODE Transformer: An Ordinary Differential Equation-Inspired Model for Neural Machine Translation
Apr 06, 2021

Learning Light-Weight Translation Models from Deep Transformer
Dec 27, 2020

A Simple and Effective Approach to Robust Unsupervised Bilingual Dictionary Induction
Nov 30, 2020

Shallow-to-Deep Training for Neural Machine Translation
Oct 08, 2020

Weight Distillation: Transferring the Knowledge in Neural Network Parameters
Sep 19, 2020