Lisa Jöckel

Operationalizing Assurance Cases for Data Scientists: A Showcase of Concepts and Tooling in the Context of Test Data Quality for Machine Learning

Dec 08, 2023

Uncertainty Wrapper in the medical domain: Establishing transparent uncertainty quantification for opaque machine learning models in practice

Nov 09, 2023

Timeseries-aware Uncertainty Wrappers for Uncertainty Quantification of Information-Fusion-Enhanced AI Models based on Machine Learning

May 24, 2023

Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception

Jun 14, 2022

Integrating Testing and Operation-related Quantitative Evidences in Assurance Cases to Argue Safety of Data-Driven AI/ML Components

Feb 10, 2022

A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty Estimates for AI Models

Jan 10, 2022

Towards a Common Testing Terminology for Software Engineering and Artificial Intelligence Experts

Sep 06, 2021