Sekhar Tatikonda

Surrogate Gap Minimization Improves Sharpness-Aware Training

Mar 19, 2022

Momentum Centering and Asynchronous Update for Adaptive Gradient Methods

Oct 17, 2021

MALI: A memory efficient and reverse accurate integrator for Neural ODEs

Mar 03, 2021

Multiple-shooting adjoint method for whole-brain dynamic causal modeling

Feb 14, 2021

AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients

Oct 24, 2020

Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE

Jun 03, 2020

Zero-shot Transfer Learning for Semantic Parsing

Aug 27, 2018

Sequence to Logic with Copy and Cache

Jul 19, 2018

A new approach to Laplacian solvers and flow problems

Nov 22, 2016

Scale-free network optimization: foundations and algorithms

Feb 12, 2016