
Hangzhi Guo

The Reopening of Pandora's Box: Analyzing the Role of LLMs in the Evolving Battle Against AI-Generated Fake News

Oct 25, 2024

Hey GPT, Can You be More Racist? Analysis from Crowdsourced Attempts to Elicit Biased Content from Generative AI

Oct 20, 2024

Watermarking Counterfactual Explanations

May 29, 2024

A Taxonomy of Rater Disagreements: Surveying Challenges & Opportunities from the Perspective of Annotating Online Toxicity

Nov 07, 2023

RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model

Jun 01, 2022

CounterNet: End-to-End Training of Counterfactual Aware Predictions

Sep 15, 2021