
Jörg Schlötterer

Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany; University of Duisburg-Essen, Essen, Germany; Cancer Research Center Cologne Essen

Towards Interpretable Deep Neural Networks for Tabular Data

Sep 10, 2025

The Impact of Annotator Personas on LLM Behavior Across the Perspectivism Spectrum

Aug 23, 2025

Tracing and Reversing Rank-One Model Edits

May 27, 2025

Invariant Learning with Annotation-free Environments

Apr 22, 2025

An XAI-based Analysis of Shortcut Learning in Neural Networks

Apr 22, 2025

Guiding LLMs to Generate High-Fidelity and High-Quality Counterfactual Explanations for Text Classification

Mar 06, 2025

Behavioral Analysis of Information Salience in Large Language Models

Feb 20, 2025

This looks like what? Challenges and Future Research Directions for Part-Prototype Models

Feb 13, 2025

Position: Editing Large Language Models Poses Serious Safety Risks

Feb 05, 2025

Funzac at CoMeDi Shared Task: Modeling Annotator Disagreement from Word-In-Context Perspectives

Jan 24, 2025