Michael Garcia-Ortiz

Are standard Object Segmentation models sufficient for Learning Affordance Segmentation?

Jul 05, 2021

SCOD: Active Object Detection for Embodied Agents using Sensory Commutativity of Action Sequences

Jul 05, 2021

On the Sensory Commutativity of Action Sequences for Embodied Agents

Feb 13, 2020

Symmetry-Based Disentangled Representation Learning requires Interaction with Environments

Mar 30, 2019

S-TRIGGER: Continual State Representation Learning via Self-Triggered Generative Replay

Feb 25, 2019

Generative Models from the perspective of Continual Learning

Dec 21, 2018

Continual State Representation Learning for Reinforcement Learning using Generative Replay

Nov 02, 2018

Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning

Sep 10, 2018