Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the visited state. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
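The BC formulation above can be sketched in a few lines: given state-action pairs, fit a supervised regressor from states to actions and use it as the policy. The example below is a minimal illustration, assuming a hypothetical toy expert whose action is a fixed linear function of the state; the names, dynamics, and least-squares model are illustrative choices, not a reference implementation.

```python
import numpy as np

# Hypothetical toy demonstrations: an expert that pushes the state toward
# zero, i.e. action = -0.5 * state (this expert is an assumption for the demo).
rng = np.random.default_rng(0)
states = rng.normal(size=(100, 2))   # visited states from demonstration trajectories
actions = -0.5 * states              # demonstrated actions at those states

# Behavior Cloning: treat the demonstrated actions as supervised targets
# and fit a state -> action mapping (here, ordinary least squares).
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Learned behavior policy: predicted action for a given state."""
    return state @ W
```

Because the toy expert is exactly linear, the cloned policy recovers it; with a real dataset the same structure holds, but the regressor would be a richer function class (e.g. a neural network) and generalization beyond the demonstrated states becomes the central difficulty.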

Multi-agent imitation learning with function approximation: Linear Markov games and beyond

Feb 26, 2026

GraspLDP: Towards Generalizable Grasping Policy via Latent Diffusion

Feb 26, 2026

Risk-Aware World Model Predictive Control for Generalizable End-to-End Autonomous Driving

Feb 26, 2026

Biomechanical Comparisons Reveal Divergence of Human and Humanoid Gaits

Feb 25, 2026

Primary-Fine Decoupling for Action Generation in Robotic Imitation

Feb 25, 2026

Reinforcement-aware Knowledge Distillation for LLM Reasoning

Feb 26, 2026

Matching Multiple Experts: On the Exploitability of Multi-Agent Imitation Learning

Feb 24, 2026

RADAR: Reasoning as Discrimination with Aligned Representations for LLM-based Knowledge Graph Reasoning

Feb 25, 2026

EgoAVFlow: Robot Policy Learning with Active Vision from Human Egocentric Videos via 3D Flow

Feb 25, 2026

Beyond Mimicry: Toward Lifelong Adaptability in Imitation Learning

Feb 23, 2026