Gesina Schwalbe

Benchmarking Vision Foundation Models for Input Monitoring in Autonomous Driving

Jan 14, 2025

Unveiling Ontological Commitment in Multi-Modal Foundation Models

Sep 25, 2024

Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study

May 31, 2024

The Anatomy of Adversarial Attacks: Concept-based XAI Dissection

Mar 25, 2024

GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces

Nov 24, 2023

Have We Ever Encountered This Before? Retrieving Out-of-Distribution Road Obstacles from Driving Scenes

Sep 08, 2023

Quantified Semantic Comparison of Convolutional Neural Networks

Apr 30, 2023

Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability

Apr 28, 2023

Concept Embedding Analysis: A Review

Mar 25, 2022

Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks

Jan 03, 2022