
Ayanna Howard

Mitigating Racial Biases in Toxic Language Detection with an Equity-Based Ensemble Framework

Sep 27, 2021

A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play

Jun 10, 2020

Does Removing Stereotype Priming Remove Bias? A Pilot Human-Robot Interaction Study

Jul 03, 2018