
Taiwei Shi

WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback

Aug 28, 2024

How Susceptible are Large Language Models to Ideological Manipulation?

Feb 22, 2024

Can Language Model Moderators Improve the Health of Online Discourse?

Nov 16, 2023

Safer-Instruct: Aligning Language Models with Automated Preference Data

Nov 15, 2023

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

Oct 24, 2023

Neural Story Planning

Dec 16, 2022