Xuyang Liu

OmniSIFT: Modality-Asymmetric Token Compression for Efficient Omni-modal Large Language Models

Feb 04, 2026

Structure-based RNA Design by Step-wise Optimization of Latent Diffusion Model

Jan 27, 2026

IPCV: Information-Preserving Compression for MLLM Visual Encoders

Dec 21, 2025

Mixing Importance with Diversity: Joint Optimization for KV Cache Compression in Large Vision-Language Models

Oct 23, 2025

AI for Service: Proactive Assistance with AI Glasses

Oct 16, 2025

Shifting AI Efficiency From Model-Centric to Data-Centric Compression

May 25, 2025

Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models

May 20, 2025

Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm Perception in Large Vision-Language Models

Mar 15, 2025

Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration

Jan 09, 2025

Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration

Nov 26, 2024