Xianglong Liu

AFTER: Mitigating the Object Hallucination of LVLM via Adaptive Factual-Guided Activation Editing

Jan 05, 2026

M2G-Eval: Enhancing and Evaluating Multi-granularity Multilingual Code Generation

Dec 27, 2025

Context as a Tool: Context Management for Long-Horizon SWE-Agents

Dec 26, 2025

RoboSafe: Safeguarding Embodied Agents via Executable Safety Logic

Dec 24, 2025

CodeSimpleQA: Scaling Factuality in Code Large Language Models

Dec 22, 2025

UCoder: Unsupervised Code Generation by Internal Probing of Large Language Models

Dec 19, 2025

WeMusic-Agent: Efficient Conversational Music Recommendation via Knowledge Internalization and Agentic Boundary Learning

Dec 18, 2025

Scaling Laws for Code: Every Programming Language Matters

Dec 15, 2025

MoDES: Accelerating Mixture-of-Experts Multimodal Large Language Models via Dynamic Expert Skipping

Nov 19, 2025

SLMQuant: Benchmarking Small Language Model Quantization for Practical Deployment

Nov 17, 2025