Xiaoyuan Yi

Leveraging Implicit Sentiments: Enhancing Reliability and Validity in Psychological Trait Evaluation of LLMs

Mar 26, 2025

Research on Superalignment Should Advance Now with Parallel Optimization of Competence and Conformity

Mar 08, 2025

PIP-KAG: Mitigating Knowledge Conflicts in Knowledge-Augmented Generation via Parametric Pruning

Feb 21, 2025

Value Compass Leaderboard: A Platform for Fundamental and Validated Evaluation of LLMs Values

Jan 13, 2025

The Road to Artificial SuperIntelligence: A Comprehensive Survey of Superalignment

Dec 24, 2024

Embedding an Ethical Mind: Aligning Text-to-Image Synthesis via Lightweight Value Optimization

Oct 16, 2024

CLAVE: An Adaptive Framework for Evaluating Values of LLM Generated Responses

Jul 15, 2024

Raising the Bar: Investigating the Values of Large Language Models via Generative Evolving Testing

Jun 20, 2024

Multi-Evidence based Fact Verification via A Confidential Graph Neural Network

May 17, 2024

Beyond Human Norms: Unveiling Unique Values of Large Language Models through Interdisciplinary Approaches

Apr 19, 2024