Xianfeng Tang

ResMoE: Space-efficient Compression of Mixture of Experts LLMs via Residual Restoration
Mar 10, 2025

Cite Before You Speak: Enhancing Context-Response Grounding in E-commerce Conversational LLM-Agents
Mar 05, 2025

How Far are LLMs from Real Search? A Comprehensive Study on Efficiency, Completeness, and Inherent Capabilities
Feb 26, 2025

A General Framework to Enhance Fine-tuning-based LLM Unlearning
Feb 25, 2025

Stepwise Perplexity-Guided Refinement for Efficient Chain-of-Thought Reasoning in Large Language Models
Feb 18, 2025

IHEval: Evaluating Language Models on Following the Instruction Hierarchy
Feb 12, 2025

Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning
Jan 14, 2025

Retrieval-Augmented Generation with Graphs (GraphRAG)
Jan 08, 2025

A Survey of Calibration Process for Black-Box LLMs
Dec 17, 2024

Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data
Nov 12, 2024