Bhavya Ghai

Towards Fair and Explainable AI using a Human-Centered AI Approach
Jun 12, 2023

D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
Aug 10, 2022

Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions
Feb 08, 2022

Fluent: An AI Augmented Writing Tool for People who Stutter
Aug 23, 2021

WordBias: An Interactive Visual Tool for Discovering Intersectional Biases Encoded in Word Embeddings
Mar 05, 2021

Active Learning++: Incorporating Annotator's Rationale using Local Model Explanation
Sep 06, 2020

Measuring Social Biases of Crowd Workers using Counterfactual Queries
Apr 04, 2020

Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
Jan 31, 2020

Does Speech enhancement of publicly available data help build robust Speech Recognition Systems?
Nov 20, 2019