Bettina Berendt

Articulation Work and Tinkering for Fairness in Machine Learning

Jul 23, 2024

Silencing the Risk, Not the Whistle: A Semi-automated Text Sanitization Tool for Mitigating the Risk of Whistleblower Re-Identification

May 02, 2024

Tik-to-Tok: Translating Language Models One Token at a Time: An Embedding Initialization Strategy for Efficient Language Adaptation

Oct 05, 2023

Bias, diversity, and challenges to fairness in classification and automated text analysis. From libraries to AI and back

Mar 07, 2023

Domain Adaptive Decision Trees: Implications for Accuracy and Fairness

Feb 27, 2023

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

Jan 30, 2023

Political representation bias in DBpedia and Wikidata as a challenge for downstream processing

Dec 29, 2022

RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use

Nov 15, 2022

FairDistillation: Mitigating Stereotyping in Language Models

Jul 10, 2022

RobBERTje: a Distilled Dutch BERT Model

Apr 28, 2022