Levent Sagun

On the Role of Speech Data in Reducing Toxicity Detection Bias

Nov 12, 2024

The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models

Nov 06, 2024

Reassessing the Validity of Spurious Correlations Benchmarks

Sep 06, 2024

Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction

Sep 29, 2023

Weisfeiler and Lehman Go Measurement Modeling: Probing the Validity of the WL Test

Jul 11, 2023

Simplicity Bias Leads to Amplified Performance Disparities

Dec 13, 2022

Measuring and signing fairness as performance under multiple stakeholder distributions

Jul 20, 2022

Understanding out-of-distribution accuracies through quantifying difficulty of test samples

Mar 28, 2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision

Feb 22, 2022

Fairness Indicators for Systematic Assessments of Visual Feature Extractors

Feb 15, 2022