Eitan Borgnia

What do Vision Transformers Learn? A Visual Exploration

Dec 13, 2022

Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries

Oct 19, 2022

Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise

Aug 19, 2022

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking

Feb 15, 2022

Datasets for Studying Generalization from Easy to Hard Examples

Aug 13, 2021

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Aug 03, 2021

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks

Jun 08, 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations

Mar 02, 2021

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

Nov 18, 2020