Ying-Chun Lin

Rethinking Node Representation Interpretation through Relation Coherence

Nov 01, 2024

Improving Node Representation by Boosting Target-Aware Contrastive Loss

Oct 04, 2024

WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback

Aug 28, 2024
Figures 1–4

Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models

Mar 19, 2024
Figures 1–4