Xianzhen Luo

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Aug 16, 2024

Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training

Jun 25, 2024

Semi-Instruct: Bridging Natural-Instruct and Self-Instruct for Code Large Language Models

Mar 01, 2024

MultiPoT: Multilingual Program of Thoughts Harnesses Multiple Programming Languages

Feb 16, 2024

A Survey on Natural Language Processing for Programming

Dec 12, 2022

Inverse is Better! Fast and Accurate Prompt for Few-shot Slot Tagging

Apr 02, 2022