Weihao Liu

Latent Thoughts Tuning: Bridging Context and Reasoning with Fused Information in Latent Tokens

Feb 10, 2026

Inference Computation Scaling for Feature Augmentation in Recommendation Systems

Feb 22, 2025

MuDAF: Long-Context Multi-Document Attention Focusing through Contrastive Learning on Attention Heads

Feb 19, 2025

Towards Truthful Multilingual Large Language Models: Benchmarking and Alignment Strategies

Jun 20, 2024

Cocktail: A Comprehensive Information Retrieval Benchmark with LLM-Generated Documents Integration

May 26, 2024

SPA: Towards A Computational Friendly Cloud-Base and On-Devices Collaboration Seq2seq Personalized Generation

Mar 11, 2024

ERA-CoT: Improving Chain-of-Thought through Entity Relationship Analysis

Mar 11, 2024

RA-ISF: Learning to Answer and Understand from Retrieval Augmentation via Iterative Self-Feedback

Mar 11, 2024

MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models

Feb 20, 2024

LLMs may Dominate Information Access: Neural Retrievers are Biased Towards LLM-Generated Texts

Oct 31, 2023