Tim Seyde

Faster Algorithms for Growing Collision-Free Convex Polytopes in Robot Configuration Space

Oct 16, 2024

Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution

Apr 05, 2024

Cooperative Flight Control Using Visual-Attention -- Air-Guardian

Dec 21, 2022

Solving Continuous Control via Q-learning

Oct 22, 2022

Interpreting Neural Policies with Disentangled Tree Representations

Oct 13, 2022

Neighborhood Mixup Experience Replay: Local Convex Interpolation for Improved Sample Efficiency in Continuous Control Tasks

May 18, 2022

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies

Nov 03, 2021

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space

Feb 19, 2021

Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles

Oct 27, 2020

Locomotion Planning through a Hybrid Bayesian Trajectory Optimization

Mar 09, 2019