Moninder Singh

Reasoning about concepts with LLMs: Inconsistencies abound

May 30, 2024

Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations

Mar 08, 2024

Ranking Large Language Models without Ground Truth

Feb 21, 2024

SocialStigmaQA: A Benchmark to Uncover Stigma Amplification in Generative Language Models

Dec 27, 2023

Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions

Feb 17, 2023

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022

Anomaly Attribution with Likelihood Compensation

Aug 23, 2022

Write It Like You See It: Detectable Differences in Clinical Notes By Race Lead To Differential Model Recommendations

May 08, 2022

Ground-Truth, Whose Truth? -- Examining the Challenges with Annotating Toxic Text Datasets

Dec 07, 2021

An Empirical Study of Accuracy, Fairness, Explainability, Distributional Robustness, and Adversarial Robustness

Sep 29, 2021