Yuyang Ding

Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch

Oct 24, 2024

Boosting Large Language Models with Socratic Method for Conversational Mathematics Teaching

Jul 24, 2024

OpenBA-V2: Reaching 77.3% High Compression Ratio with Fast Multi-Stage Pruning

May 09, 2024

Rethinking Negative Instances for Generative Named Entity Recognition

Feb 26, 2024

Mathematical Language Models: A Survey

Dec 14, 2023

OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch

Oct 01, 2023

Detoxify Language Model Step-by-Step

Aug 16, 2023

Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study

Feb 09, 2023

SelfMix: Robust Learning Against Textual Label Noise with Self-Mixup Training

Oct 11, 2022