Ilya Feige

Task-specific experimental design for treatment effect estimation

Jun 08, 2023

Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy

Oct 23, 2020

Explainability for fair machine learning

Oct 14, 2020

Human-interpretable model explainability on high-dimensional data

Oct 14, 2020

Learning Deep-Latent Hierarchies by Stacking Wasserstein Autoencoders

Oct 07, 2020

Learning disentangled representations with the Wasserstein Autoencoder

Oct 07, 2020

Shapley-based explainability on the data manifold

Jun 01, 2020

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

Oct 14, 2019

Parenting: Safe Reinforcement Learning from Human Input

Feb 18, 2019

Invariant-equivariant representation learning for multi-class data

Feb 08, 2019