
Elizabeth M. Daly

Granite Guardian

Dec 10, 2024

Usage Governance Advisor: from Intent to AI Governance

Dec 02, 2024

Evaluating the Prompt Steerability of Large Language Models

Nov 19, 2024

Black-box Uncertainty Quantification Method for LLM-as-a-Judge

Oct 15, 2024

Language Models in Dialogue: Conversational Maxims for Human-AI Interactions

Mar 22, 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024

Interpretable Differencing of Machine Learning Models

Jun 13, 2023

AutoDOViz: Human-Centered Automation for Decision Optimization

Feb 19, 2023

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022

User Driven Model Adjustment via Boolean Rule Explanations

Mar 28, 2022