Arjun Ravi Kannan

MBExplainer: Multilevel bandit-based explanations for downstream models with augmented graph embeddings

Nov 01, 2024

Mechanistic interpretability of large language models with applications to the financial services industry

Jul 15, 2024

MS-IMAP -- A Multi-Scale Graph Embedding Approach for Interpretable Manifold Learning

Jun 06, 2024

Approximation of group explainers with coalition structure using Monte Carlo sampling on the product space of coalitions and features

Mar 17, 2023

On marginal feature attributions of tree-based models

Feb 16, 2023

Model-agnostic bias mitigation methods with regressor distribution control for Wasserstein-based fairness metrics

Nov 19, 2021

Wasserstein-based fairness interpretability framework for machine learning models

Nov 06, 2020