Maram Akila

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Sankt Augustin, Germany

Assessing Systematic Weaknesses of DNNs using Counterfactuals

Aug 03, 2023

Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

Jun 20, 2023

A Survey on Uncertainty Toolkits for Deep Learning

May 02, 2022

Tailored Uncertainty Estimation for Deep Learning Systems

Apr 29, 2022

Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis

Jun 10, 2021

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

Apr 29, 2021

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities

Apr 22, 2021

Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation

Apr 19, 2021

A Novel Regression Loss for Non-Parametric Uncertainty Optimization

Jan 07, 2021

Second-Moment Loss: A Novel Regression Objective for Improved Uncertainties

Dec 23, 2020