Sanjay Kariyappa

Progressive Inference: Explaining Decoder-Only Sequence Classification Models Using Intermediate Predictions
Jun 03, 2024

Privacy-Preserving Algorithmic Recourse
Nov 23, 2023

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features
Jul 10, 2023

Information Flow Control in Machine Learning through Modular Model Architecture
Jun 05, 2023

Bounding the Invertibility of Privacy-preserving Instance Encoding using Fisher Information
May 06, 2023

Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information
Sep 21, 2022

Cocktail Party Attack: Breaking Aggregation-Based Privacy in Federated Learning using Independent Component Analysis
Sep 12, 2022

Gradient Inversion Attack: Leaking Private Labels in Two-Party Split Learning
Nov 25, 2021

Enabling Inference Privacy with Adaptive Noise Injection
Apr 06, 2021

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
May 06, 2020