Jia Cheng Hu

Bidirectional Awareness Induction in Autoregressive Seq2Seq Models

Aug 25, 2024

Shifted Window Fourier Transform And Retention For Image Captioning

Aug 25, 2024

Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation

Dec 26, 2023

A request for clarity over the End of Sequence token in the Self-Critical Sequence Training

May 20, 2023

ExpansionNet v2: Block Static Expansion in fast end to end training for Image Captioning

Aug 19, 2022

ExpansionNet: exploring the sequence length bottleneck in the Transformer for Image Captioning

Jul 07, 2022