Moritz Böhle

B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable

Nov 01, 2024

Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery

Jul 19, 2024

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

Feb 05, 2024

B-cos Alignment for Inherently Interpretable CNNs and Vision Transformers

Jun 19, 2023

Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data

Mar 23, 2023

Better Understanding Differences in Attribution Methods via Systematic Evaluations

Mar 21, 2023

Using Explanations to Guide Models

Mar 21, 2023

Holistically Explainable Vision Transformers

Jan 20, 2023

Towards Better Understanding Attribution Methods

May 20, 2022

B-cos Networks: Alignment is All We Need for Interpretability

May 20, 2022