Zekun Wu

Saarland University, Germany

Bias Amplification: Language Models as Increasingly Biased Media

Oct 19, 2024

Assessing Bias in Metric Models for LLM Open-Ended Generation Bias Benchmarks

Oct 14, 2024

HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection

Sep 17, 2024

SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration

Sep 17, 2024

THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models

Sep 17, 2024

From Text to Emoji: How PEFT-Driven Personality Manipulation Unleashes the Emoji Potential in LLMs

Sep 16, 2024

JobFair: A Framework for Benchmarking Gender Hiring Bias in Large Language Models

Jun 17, 2024

Enhancing Saliency Prediction in Monitoring Tasks: The Role of Visual Highlights

May 15, 2024

Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction

May 02, 2024

Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation

Apr 02, 2024