Marcin Detyniecki

SAKE: Steering Activations for Knowledge Editing

Mar 03, 2025

Controlled Model Debiasing through Minimal and Interpretable Updates

Feb 28, 2025

Regret-Optimized Portfolio Enhancement through Deep Reinforcement Learning and Future Looking Rewards

Feb 04, 2025

Post-processing fairness with minimal changes

Aug 27, 2024

Why do explanations fail? A typology and discussion on failures in XAI

May 22, 2024

OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning

Apr 16, 2024

On the Fairness ROAD: Robust Optimization for Adversarial Debiasing

Oct 27, 2023

Achieving Diversity in Counterfactual Explanations: a Review and Discussion

May 10, 2023

When Mitigating Bias is Unfair: A Comprehensive Study on the Impact of Bias Mitigation Algorithms

Feb 14, 2023

Integrating Prior Knowledge in Post-hoc Explanations

Apr 25, 2022