Xiaozhong Liu

Science Out of Its Ivory Tower: Improving Accessibility with Reinforcement Learning
Oct 22, 2024

LangGFM: A Large Language Model Alone Can be a Powerful Graph Foundation Model
Oct 19, 2024

A Speaker Turn-Aware Multi-Task Adversarial Network for Joint User Satisfaction Estimation and Sentiment Analysis
Oct 12, 2024

LLM Cascade with Multi-Objective Optimal Consideration
Oct 10, 2024

Can Large Language Models Grasp Legal Theories? Enhance Legal Reasoning with Insights from Multi-Agent Collaboration
Oct 03, 2024

PersonaMark: Personalized LLM watermarking for model protection and user attribution
Sep 15, 2024

Black-Box Opinion Manipulation Attacks to Retrieval-Augmented Generation of Large Language Models
Jul 18, 2024

Knowledge-Infused Legal Wisdom: Navigating LLM Consultation through the Lens of Diagnostics and Positive-Unlabeled Reinforcement Learning
Jun 05, 2024

Enhance Robustness of Language Models Against Variation Attack through Graph Integration
Apr 18, 2024

From Model-centered to Human-Centered: Revision Distance as a Metric for Text Evaluation in LLMs-based Applications
Apr 11, 2024