Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action to take in the state being visited. The demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized state-to-action mapping in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
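
To make the BC formulation concrete, below is a minimal behavior cloning sketch in PyTorch: each demonstrated action serves as the supervised regression target for its state. The network architecture, dimensions, and synthetic data here are illustrative placeholders, not taken from any of the papers listed below.

```python
# Minimal behavior cloning sketch (illustrative; dimensions and data are placeholders).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2  # hypothetical state/action dimensions

# Toy stand-in for a demonstration dataset of (state, action) pairs
# flattened from state-action trajectories.
states = torch.randn(1024, STATE_DIM)
actions = torch.randn(1024, ACTION_DIM)

# Policy network mapping states to continuous actions.
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # for discrete actions, cross-entropy would be used instead

# BC treats each demonstrated action as the target label for its state
# and fits the mapping with ordinary supervised learning.
for epoch in range(100):
    pred = policy(states)
    loss = loss_fn(pred, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

An IRL method would instead search for a reward function that makes the demonstrated decisions optimal, typically by alternating between reward estimation and policy optimization; that inner reinforcement-learning loop is omitted here for brevity.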

Videos are Sample-Efficient Supervisions: Behavior Cloning from Videos via Latent Representations

Dec 25, 2025

Proprioception Enhances Vision Language Model in Generating Captions and Subtask Segmentations for Robot Task

Dec 24, 2025

RoboCade: Gamifying Robot Data Collection

Dec 24, 2025

LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving

Dec 23, 2025

SD2AIL: Adversarial Imitation Learning from Synthetic Demonstrations via Diffusion Models

Dec 21, 2025

Learning Generalizable Hand-Object Tracking from Synthetic Demonstrations

Dec 22, 2025

TakeAD: Preference-based Post-optimization for End-to-end Autonomous Driving with Expert Takeover Data

Dec 22, 2025

DTCCL: Disengagement-Triggered Contrastive Continual Learning for Autonomous Bus Planners

Dec 22, 2025

Are All Data Necessary? Efficient Data Pruning for Large-scale Autonomous Driving Dataset via Trajectory Entropy Maximization

Dec 22, 2025

Offline Reinforcement Learning for End-to-End Autonomous Driving

Dec 21, 2025