Dongliang Xu

Advancing Large Language Model Attribution through Self-Improving

Oct 17, 2024

GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization

Oct 05, 2024

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Aug 16, 2024

Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training

Jun 25, 2024

MoGU: A Framework for Enhancing Safety of Open-Sourced LLMs While Preserving Their Usability

May 23, 2024

Meaningful Learning: Advancing Abstract Reasoning in Large Language Models via Generic Fact Guidance

Mar 14, 2024

Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models

Mar 01, 2024

MultiPoT: Multilingual Program of Thoughts Harnesses Multiple Programming Languages

Feb 16, 2024

DAPT: A Dual Attention Framework for Parameter-Efficient Continual Learning of Large Language Models

Jan 16, 2024

XuanYuan 2.0: A Large Chinese Financial Chat Model with Hundreds of Billions Parameters

May 19, 2023