Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which those decisions are optimal.
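The BC idea above can be illustrated with a minimal sketch: given expert state-action pairs, fit any supervised predictor and use it as the policy. The data and the 1-nearest-neighbor predictor here are hypothetical stand-ins (real BC usually trains a neural network on large demonstration sets), chosen only to keep the example self-contained.

```python
import numpy as np

# Hypothetical expert demonstrations: 2-D states with discrete actions
# (0 = steer left, 1 = steer right). In practice these come from
# recorded state-action trajectories.
demo_states = np.array([[0.0, 1.0], [0.2, 0.9], [1.0, 0.0], [0.9, 0.2]])
demo_actions = np.array([0, 0, 1, 1])

def bc_policy(state, states=demo_states, actions=demo_actions):
    """Behavior cloning as supervised learning: predict the action the
    expert took at the most similar demonstrated state (1-NN here)."""
    dists = np.linalg.norm(states - state, axis=1)
    return int(actions[np.argmin(dists)])

print(bc_policy(np.array([0.1, 0.95])))  # imitates the expert's "left" action
print(bc_policy(np.array([0.95, 0.1])))  # imitates the expert's "right" action
```

Any classifier or regressor can replace the nearest-neighbor lookup; the defining feature of BC is that actions are treated as labels, so the policy generalizes only as well as the supervised model does on states outside the demonstrations.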

CoIRL-AD: Collaborative-Competitive Imitation-Reinforcement Learning in Latent World Models for Autonomous Driving

Oct 14, 2025

Reflection-Based Task Adaptation for Self-Improving VLA

Oct 14, 2025

Autonomous Soft Robotic Guidewire Navigation via Imitation Learning

Oct 10, 2025

Near-Optimal Second-Order Guarantees for Model-Based Adversarial Imitation Learning

Oct 10, 2025

Guiding Energy-Efficient Locomotion through Impact Mitigation Rewards

Oct 10, 2025

Failure Prediction at Runtime for Generative Robot Policies

Oct 10, 2025

DecompGAIL: Learning Realistic Traffic Behaviors with Decomposed Multi-Agent Generative Adversarial Imitation Learning

Oct 08, 2025

MobRT: A Digital Twin-Based Framework for Scalable Learning in Mobile Manipulation

Oct 06, 2025

MATRIX: Multimodal Agent Tuning for Robust Tool-Use Reasoning

Oct 09, 2025

Reliable and Scalable Robot Policy Evaluation with Imperfect Simulators

Oct 05, 2025