Anshul Nasery

OML: Open, Monetizable, and Loyal AI

Nov 01, 2024

PLeaS -- Merging Models with Permutations and Least Squares

Jul 02, 2024

PEEKABOO: Interactive Video Generation via Masked-Diffusion

Dec 12, 2023

Label Differential Privacy via Aggregation

Oct 20, 2023

End-to-End Neural Network Compression via $\frac{\ell_1}{\ell_2}$ Regularized Latency Surrogates

Jun 13, 2023

Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks

Oct 04, 2022

DAFT: Distilling Adversarially Fine-tuned Models for Better OOD Generalization

Aug 19, 2022

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

Aug 15, 2021

Rule Augmented Unsupervised Constituency Parsing

May 21, 2021

What if Neural Networks had SVDs?

Sep 29, 2020