
Vishwak Srinivasan

Near-Optimal Private Linear Regression via Iterative Hessian Mixing
Jan 12, 2026

Designing Algorithms for Entropic Optimal Transport from an Optimisation Perspective
Jul 16, 2025

The Gaussian Mixing Mechanism: Renyi Differential Privacy via Gaussian Sketches
May 30, 2025

High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm
Dec 24, 2024

Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin Algorithm
Dec 14, 2023

Sample Efficient Reinforcement Learning In Continuous State Spaces: A Perspective Beyond Linearity
Jun 15, 2021

On the Analysis of Trajectories of Gradient Descent in the Optimization of Deep Neural Networks
Jul 21, 2018

ADINE: An Adaptive Momentum Method for Stochastic Gradient Descent
Dec 20, 2017