Rahul Nair

Making Bias Amplification in Balanced Datasets Directional and Interpretable

Dec 15, 2024

Classification Drives Geographic Bias in Street Scene Segmentation

Dec 15, 2024

Black-box Uncertainty Quantification Method for LLM-as-a-Judge

Oct 15, 2024

On Efficient and Statistical Quality Estimation for Data Annotation

May 20, 2024

Ranking Large Language Models without Ground Truth

Feb 21, 2024

Explaining Knock-on Effects of Bias Mitigation

Dec 01, 2023

Iterative Reward Shaping using Human Feedback for Correcting Reward Misspecification

Aug 30, 2023

Co-creating a globally interpretable model with human input

Jun 23, 2023

Interpretable Differencing of Machine Learning Models

Jun 13, 2023

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023