Marvin Zhang

GPT-4o System Card (Oct 25, 2024)

MEMO: Test Time Robustness via Adaptation and Augmentation (Oct 18, 2021)

WILDS: A Benchmark of in-the-Wild Distribution Shifts (Dec 14, 2020)

Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift (Jul 06, 2020)

AVID: Learning Multi-Stage Tasks via Pixel-Level Translation of Human Videos (Dec 10, 2019)

When to Trust Your Model: Model-Based Policy Optimization (Jun 19, 2019)

SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning (Feb 20, 2019)

Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning (Jun 18, 2017)

Deep Reinforcement Learning for Tensegrity Robot Locomotion (Mar 08, 2017)

Learning Deep Neural Network Policies with Continuous Memory States (Sep 23, 2015)