Jianxun Lian

Microsoft Research Asia

HumanLLM: Towards Personalized Understanding and Simulation of Human Nature

Jan 22, 2026

Why not Collaborative Filtering in Dual View? Bridging Sparse and Dense Models

Jan 14, 2026

NAMeGEn: Creative Name Generation via A Novel Agent-based Multiple Personalized Goal Enhancement Framework

Nov 19, 2025

Learning Pluralistic User Preferences through Reinforcement Learning Fine-tuned Summaries

Jul 17, 2025

MotiveBench: How Far Are We From Human-Like Motivational Reasoning in Large Language Models?

Jun 16, 2025

Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study

Jun 16, 2025

Avoid Recommending Out-of-Domain Items: Constrained Generative Recommendation with LLMs

May 06, 2025

LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System

Jan 27, 2025

The Road to Artificial SuperIntelligence: A Comprehensive Survey of Superalignment

Dec 24, 2024

TrendSim: Simulating Trending Topics in Social Media Under Poisoning Attacks with LLM-based Multi-agent System

Dec 14, 2024