Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
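The Behavior Cloning idea above can be illustrated with a minimal sketch: treat each demonstrated action as a supervised target for its state and fit a policy by regression. Everything here is illustrative, not from any specific paper listed below; the linear policy class, the synthetic expert weights, and the least-squares fit are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Toy demonstration data: 2-D states and 1-D expert actions.
# In practice these come from recorded state-action trajectories.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 2))
expert_w = np.array([1.5, -0.5])   # hidden expert policy (only used to generate toy data)
actions = states @ expert_w        # demonstrated actions for each visited state

# Behavior Cloning: regress actions on states in a supervised manner.
# Here the policy class is linear, fit by ordinary least squares.
w_bc, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy now generalizes to unseen states.
new_state = np.array([0.3, -1.2])
predicted_action = new_state @ w_bc
```

With noise-free linear demonstrations the least-squares fit recovers the expert weights exactly; real demonstrations are noisy and the policy class (e.g. a neural network) is chosen to match the task.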

IRIS: Learning-Driven Task-Specific Cinema Robot Arm for Visuomotor Motion Control

Feb 19, 2026

VIGOR: Visual Goal-In-Context Inference for Unified Humanoid Fall Safety

Feb 18, 2026

Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation

Feb 18, 2026

Learning to Retrieve Navigable Candidates for Efficient Vision-and-Language Navigation

Feb 17, 2026

BPP: Long-Context Robot Imitation Learning by Focusing on Key History Frames

Feb 16, 2026

GRAIL: Goal Recognition Alignment through Imitation Learning

Feb 15, 2026

DriveFine: Refining-Augmented Masked Diffusion VLA for Precise and Robust Driving

Feb 16, 2026

AdaptManip: Learning Adaptive Whole-Body Object Lifting and Delivery with Online Recurrent State Estimation

Feb 16, 2026

A Soft Wrist with Anisotropic and Selectable Stiffness for Robust Robot Learning in Contact-rich Manipulation

Feb 16, 2026

Beyond Imitation: Reinforcement Learning-Based Sim-Real Co-Training for VLA Models

Feb 16, 2026