
Aurick Zhou

MotionLM: Multi-Agent Motion Forecasting as Language Modeling

Sep 28, 2023

Wayformer: Motion Forecasting via Simple & Efficient Attention Networks

Jul 12, 2022

Training on Test Data with Bayesian Adaptation for Covariate Shift

Sep 27, 2021

MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning

Jul 18, 2021

Amortized Conditional Normalized Maximum Likelihood

Nov 05, 2020

Conservative Q-Learning for Offline Reinforcement Learning

Jun 29, 2020

Learning to Walk via Deep Reinforcement Learning

Mar 25, 2019

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

Mar 19, 2019

Soft Actor-Critic Algorithms and Applications

Jan 29, 2019

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor

Aug 08, 2018