Dongwei Jiang

Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets

Oct 06, 2024

RATIONALYST: Pre-training Process-Supervision for Improving Reasoning

Oct 01, 2024

To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning

Sep 18, 2024

Benchmarking Language Model Creativity: A Case Study on Code Generation

Jul 12, 2024

SELF-CORRECT: LLMs Struggle with Refining Self-Generated Responses

Apr 04, 2024

LeanReasoner: Boosting Complex Logical Reasoning with Lean

Add code
Mar 20, 2024
Viaarxiv icon

Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic

Feb 27, 2024

Speech SimCLR: Combining Contrastive and Reconstruction Objective for Self-supervised Speech Representation Learning

Oct 27, 2020

TMT: A Transformer-based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-aware Dialog

Oct 21, 2020

A Further Study of Unsupervised Pre-training for Transformer Based Speech Recognition

Jun 23, 2020