Zhuoshi Pan

TokenSelect: Efficient Long-Context Inference and Length Extrapolation for LLMs via Dynamic Token-Level KV Cache Selection

Nov 05, 2024

LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

Mar 19, 2024

From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models

Nov 04, 2023