
Elie Aljalbout

LIMT: Language-Informed Multi-Task Visual World Models

Jul 18, 2024

The Shortcomings of Force-from-Motion in Robot Learning

Jul 03, 2024

Guided Decoding for Robot Motion Generation and Adaptation

Mar 22, 2024

On the Role of the Action Space in Robot Manipulation Learning and Sim-to-Real Transfer

Dec 06, 2023

CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces

Nov 28, 2022

Learning Robotic Manipulation Skills Using an Adaptive Force-Impedance Action Space

Oct 20, 2021

Dual-Arm Adversarial Robot Learning

Oct 15, 2021

Learning to Centralize Dual-Arm Assembly

Oct 08, 2021

Seeking Visual Discomfort: Curiosity-driven Representations for Reinforcement Learning

Oct 02, 2021

Making Curiosity Explicit in Vision-based RL

Sep 28, 2021