Weijie Liu

ORBIT: On-policy Exploration-Exploitation for Controllable Multi-Budget Reasoning
Jan 13, 2026

VI-MMRec: Similarity-Aware Training Cost-free Virtual User-Item Interactions for Multimodal Recommendation
Dec 09, 2025

EntroPIC: Towards Stable Long-Term Training of LLMs via Entropy Stabilization with Proportional-Integral Control
Nov 19, 2025

Do Not Step Into the Same River Twice: Learning to Reason from Trial and Error
Oct 30, 2025

Think Outside the Policy: In-Context Steered Policy Optimization
Oct 30, 2025

Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought
May 21, 2025

TACO: Tackling Over-correction in Federated Learning with Tailored Adaptive Correction
Apr 24, 2025

Many-to-Many Matching via Sparsity Controlled Optimal Transport
Mar 31, 2025

Hunyuan-Large: An Open-Source MoE Model with 52 Billion Activated Parameters by Tencent
Nov 05, 2024

FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients
Nov 04, 2024