Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and seeks a reward (or cost) function under which those decisions are optimal.
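
As a concrete illustration of the BC branch, the sketch below fits a small policy network to (state, action) pairs with a standard cross-entropy loss, exactly the "action as target label" recipe described above. This is a minimal sketch assuming discrete actions and synthetic demonstration data; `PolicyNet`, the dimensions, and all variable names are illustrative and not taken from any of the papers listed below.

```python
# Minimal Behavior Cloning sketch (illustrative; assumes discrete actions
# and synthetic demonstrations, not any specific paper's method).
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 4, 3

class PolicyNet(nn.Module):
    """Maps a state to logits over actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

# Synthetic stand-in for demonstrated (state, action) pairs.
demo_states = torch.randn(512, STATE_DIM)
demo_actions = torch.randint(0, NUM_ACTIONS, (512,))

policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    logits = policy(demo_states)
    # BC: treat the demonstrated action as the supervised target label.
    loss = loss_fn(logits, demo_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, act greedily with the cloned policy.
action = policy(torch.randn(1, STATE_DIM)).argmax(dim=-1)
```

An IRL method would instead treat the demonstrated actions as evidence about intent and optimize a reward function (e.g., via maximum-entropy IRL) under which the demonstrations are optimal, rather than regressing on the actions directly.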

Task-Centric Policy Optimization from Misaligned Motion Priors

Jan 27, 2026

Trustworthy Evaluation of Robotic Manipulation: A New Benchmark and AutoEval Methods

Jan 26, 2026

Less Is More: Scalable Visual Navigation from Limited Data

Jan 25, 2026

Towards Generalisable Imitation Learning Through Conditioned Transition Estimation and Online Behaviour Alignment

Jan 24, 2026

Dancing in Chains: Strategic Persuasion in Academic Rebuttal via Theory of Mind

Jan 27, 2026

EquiForm: Noise-Robust SE(3)-Equivariant Policy Learning from 3D Point Clouds

Jan 24, 2026

MetaWorld: Skill Transfer and Composition in a Hierarchical World Model for Grounding High-Level Instructions

Jan 24, 2026

ConceptACT: Episode-Level Concepts for Sample-Efficient Robotic Imitation Learning

Jan 23, 2026

Advancing Improvisation in Human-Robot Construction Collaboration: Taxonomy and Research Roadmap

Jan 23, 2026

EvoCUA: Evolving Computer Use Agents via Learning from Scalable Synthetic Experience

Jan 23, 2026