
Mihailo R. Jovanović

From exponential to finite/fixed-time stability: Applications to optimization

Sep 18, 2024

Stability of Primal-Dual Gradient Flow Dynamics for Multi-Block Convex Optimization Problems

Aug 28, 2024

Accelerated forward-backward and Douglas-Rachford splitting dynamics

Jul 30, 2024

Provably Efficient Generalized Lagrangian Policy Optimization for Safe Multi-Agent Reinforcement Learning

May 31, 2023

Tradeoffs between convergence rate and noise amplification for momentum-based accelerated optimization algorithms

Sep 24, 2022

Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs

Jun 06, 2022

Independent Policy Gradient for Large-Scale Markov Potential Games: Sharper Rates, Function Approximation, and Game-Agnostic Convergence

Feb 08, 2022

Transient growth of accelerated first-order methods for strongly convex optimization problems

Mar 14, 2021

Provably Efficient Safe Exploration via Primal-Dual Policy Optimization

Mar 01, 2020

Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem

Dec 26, 2019