Neha Gupta

Faster Cascades via Speculative Decoding

May 29, 2024

Language Model Cascades: Token-level uncertainty and beyond

Apr 15, 2024

When Does Confidence-Based Cascade Deferral Suffice?

Jul 06, 2023

Ensembling over Classifiers: a Bias-Variance Perspective

Jun 21, 2022

Understanding the bias-variance tradeoff of Bregman divergences

Feb 10, 2022

Estimating decision tree learnability with polylogarithmic sample complexity

Nov 03, 2020

Universal guarantees for decision tree induction via a higher-order splitting criterion

Oct 16, 2020

Active Local Learning

Sep 04, 2020

Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process

Apr 19, 2019

Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression

Nov 27, 2018