Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take at the state being visited. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
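To make the Behavior Cloning idea concrete, here is a minimal sketch in NumPy: demonstrated state-action pairs are treated as a supervised regression dataset, and a linear policy is fit by least squares. The data, the expert weights, and the linear policy class are all illustrative assumptions, not part of any particular method above.

```python
import numpy as np

# Hypothetical demonstration data: each row pairs a state with the
# expert's action at that state (all values here are synthetic).
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))            # 500 demo states, 4-dim
W_expert = np.array([[1.0, -0.5],
                     [0.2,  0.3],
                     [-1.0, 0.8],
                     [0.5,  0.0]])
actions = states @ W_expert                   # expert's (noise-free) 2-dim actions

# Behavior Cloning: treat the demonstrated action as a regression target
# and fit a linear policy state -> action by least squares.
W_policy, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy then generalizes to states not in the demonstrations.
new_state = rng.normal(size=(1, 4))
predicted_action = new_state @ W_policy
```

In practice the linear map would be replaced by a neural network trained with the same supervised objective; IRL differs in that it would instead search for a reward function explaining why these actions were chosen.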

AirExo-2: Scaling up Generalizable Robotic Imitation Learning with Low-Cost Exoskeletons

Mar 05, 2025

Action Tokenizer Matters in In-Context Imitation Learning

Mar 05, 2025

Continuous Control of Diverse Skills in Quadruped Robots Without Complete Expert Datasets

Mar 05, 2025

Perceptual Motor Learning with Active Inference Framework for Robust Lateral Control

Mar 05, 2025

Curating Demonstrations using Online Experience

Mar 05, 2025

FABG: End-to-end Imitation Learning for Embodied Affective Human-Robot Interaction

Mar 04, 2025

Variable-Friction In-Hand Manipulation for Arbitrary Objects via Diffusion-Based Imitation Learning

Mar 04, 2025

A2Perf: Real-World Autonomous Agents Benchmark

Mar 04, 2025

Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation

Mar 04, 2025

Zero-Shot Sim-to-Real Visual Quadrotor Control with Hard Constraints

Mar 04, 2025