
Michael Hind

Granite Guardian

Dec 10, 2024

Usage Governance Advisor: from Intent to AI Governance

Dec 02, 2024

BenchmarkCards: Large Language Model and Risk Reporting

Oct 16, 2024

Detectors for Safe and Reliable LLMs: Implementations, Uses, and Limitations

Mar 09, 2024

Quantitative AI Risk Assessments: Opportunities and Challenges

Sep 13, 2022

Evaluating a Methodology for Increasing AI Transparency: A Case Study

Jan 24, 2022

AI Explainability 360: Impact and Design

Sep 24, 2021

A Methodology for Creating AI FactSheets

Jun 28, 2020

Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness

Jan 13, 2020

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

Sep 14, 2019