
Michael Lutter

Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning

Mar 07, 2023

Revisiting Model-based Value Expansion

Mar 28, 2022

A Differentiable Newton-Euler Algorithm for Real-World Robotics

Oct 24, 2021

Continuous-Time Fitted Value Iteration for Robust Policies

Oct 05, 2021

Combining Physics and Deep Learning to learn Continuous-Time Dynamics Models

Oct 05, 2021

Learning Dynamics Models for Model Predictive Agents

Sep 29, 2021

Robust Value Iteration for Continuous Control Tasks

May 25, 2021

Value Iteration in Continuous Actions, States and Time

May 10, 2021

Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning

Nov 03, 2020

High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

Oct 31, 2020