Alex Bie

Private prediction for large-scale synthetic text generation
Jul 16, 2024

Distribution Learnability and Robustness
Jun 25, 2024

Parametric Feature Transfer: One-shot Federated Learning with Foundation Models
Feb 02, 2024

Normalization Is All You Need: Understanding Layer-Normalized Federated Learning under Extreme Label Shift
Aug 18, 2023

Private Distribution Learning with Public Data: The View from Sample Compression
Aug 14, 2023

Private GANs, Revisited
Feb 06, 2023

Private Estimation with Public Data
Aug 16, 2022

Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
Nov 29, 2021

Fully Quantizing a Simplified Transformer for End-to-end Speech Recognition
Nov 09, 2019