Dongjie Yang

KVSharer: Efficient Inference via Layer-Wise Dissimilar KV Cache Sharing

Oct 24, 2024

Are LLMs Aware that Some Questions are not Open-ended?

Oct 01, 2024

Vript: A Video Is Worth Thousands of Words

Jun 10, 2024

PyramidInfer: Pyramid KV Cache Compression for High-throughput LLM Inference

May 21, 2024

BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer

Jul 01, 2023

RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs

May 25, 2023

Learning Better Masking for Better Language Model Pre-training

Aug 23, 2022