Guangxuan Xu

Fantastic LLMs for Preference Data Annotation and How to (not) Find Them

Nov 04, 2024

BRAIn: Bayesian Reward-conditioned Amortized Inference for natural language generation from feedback

Feb 04, 2024

Are Fairy Tales Fair? Analyzing Gender Bias in Temporal Narrative Event Chains of Children's Fairy Tales

May 26, 2023

NECE: Narrative Event Chain Extraction Toolkit

Aug 19, 2022

Non-Parallel Text Style Transfer with Self-Parallel Supervision

Apr 18, 2022

Can Model Compression Improve NLP Fairness

Jan 21, 2022

On the Safety of Conversational Models: Taxonomy, Dataset, and Benchmark

Oct 16, 2021

Mitigating Political Bias in Language Models Through Reinforced Calibration

Apr 30, 2021

Enhanced Offensive Language Detection Through Data Augmentation

Dec 05, 2020

Data Boost: Text Data Augmentation Through Reinforcement Learning Guided Conditional Generation

Dec 05, 2020