
Jie M. Zhang

University College London

LLMs Love Python: A Study of LLMs' Bias for Programming Languages and Libraries

Mar 21, 2025

Hallucination Detection in Large Language Models with Metamorphic Relations

Feb 20, 2025

Fairness-Aware Reinforcement Learning via Proximal Policy Optimization

Feb 06, 2025

Diversity Drives Fairness: Ensemble of Higher Order Mutants for Intersectional Fairness of Machine Learning Software

Dec 11, 2024

Benchmarking Bias in Large Language Models during Role-Playing

Nov 01, 2024

Using Protected Attributes to Consider Fairness in Multi-Agent Systems

Oct 16, 2024

Effi-Code: Unleashing Code Efficiency in Language Models

Oct 14, 2024

Rethinking the Influence of Source Code on Test Case Generation

Sep 14, 2024

SOAP: Enhancing Efficiency of Generated Code via Self-Optimization

May 24, 2024

LLM-Powered Test Case Generation for Detecting Tricky Bugs

Apr 16, 2024