Bogdan Kulynych

Attack-Aware Noise Calibration for Differential Privacy

Jul 02, 2024

The Fundamental Limits of Least-Privilege Learning

Feb 19, 2024

Prediction without Preclusion: Recourse Verification with Reachable Sets

Aug 24, 2023

Arbitrary Decisions are a Hidden Cost of Differentially-Private Training

Feb 28, 2023

Adversarial Robustness for Tabular Data through Cost and Utility Awareness

Aug 27, 2022

What You See is What You Get: Distributional Generalization for Algorithm Design in Deep Learning

Apr 07, 2022

Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks

Jul 11, 2021

Exploring Data Pipelines through the Process Lens: a Reference Model for Computer Vision

Jul 05, 2021

Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning

Jun 02, 2019

Questioning the assumptions behind fairness solutions

Nov 27, 2018