
Lili Qiu

- VoLUT: Efficient Volumetric streaming enhanced by LUT-based super-resolution (Feb 17, 2025)

- On Memory Construction and Retrieval for Personalized Conversational Agents (Feb 08, 2025)

- SCBench: A KV Cache-Centric Analysis of Long-Context Methods (Dec 13, 2024)

- LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation (Nov 26, 2024)

- LLM2CLIP: Powerful Language Model Unlocks Richer Visual Representation (Nov 07, 2024)

- The Potential and Value of AI Chatbot in Personalized Cognitive Training (Oct 25, 2024)

- RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval (Sep 16, 2024)

- Advancing Multi-Modal Sensing Through Expandable Modality Alignment (Jul 25, 2024)

- MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention (Jul 02, 2024)

- Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning (Jul 01, 2024)