Zhengyi Yang

A2RAG: Adaptive Agentic Graph Retrieval for Cost-Aware and Reliable Reasoning

Jan 29, 2026

Beyond Linearization: Attributed Table Graphs for Table Reasoning

Jan 13, 2026

OrchANN: A Unified I/O Orchestration Framework for Skewed Out-of-Core Vector Search

Dec 28, 2025

Do They Understand Them? An Updated Evaluation on Nonbinary Pronoun Handling in Large Language Models

Aug 01, 2025

CLGNN: A Contrastive Learning-based GNN Model for Betweenness Centrality Prediction on Temporal Graphs

Jun 17, 2025

Addressing Missing Data Issue for Diffusion-based Recommendation

Add code
May 18, 2025
Viaarxiv icon

Graphy'our Data: Towards End-to-End Modeling, Exploring and Generating Report from Raw Data

Feb 24, 2025

α-DPO: Adaptive Reward Margin is What Direct Preference Optimization Needs

Oct 14, 2024

β-DPO: Direct Preference Optimization with Dynamic β

Jul 11, 2024

Towards Robust Alignment of Language Models: Distributionally Robustifying Direct Preference Optimization

Jul 10, 2024