Binbin Xie

MoE-CT: A Novel Approach For Large Language Models Training With Resistance To Catastrophic Forgetting
Jun 25, 2024

PolyLM: An Open Source Polyglot Large Language Model
Jul 12, 2023

From Statistical Methods to Deep Learning, Automatic Keyphrase Prediction: A Survey
May 04, 2023

WR-ONE2SET: Towards Well-Calibrated Keyphrase Generation
Nov 13, 2022

Improving Tree-Structured Decoder Training for Code Generation via Mutual Learning
May 31, 2021