Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, where each pair indicates the action taken at the visited state. The demonstrated actions are typically used in one of two ways to learn the behavior policy. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward (or cost) function under which the demonstrated decisions are optimal.
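
As a concrete illustration of the behavior-cloning route described above, the sketch below trains a small policy network on (state, action) pairs with a standard classification loss. It is a minimal example only: the discrete action space, the PyTorch MLP, and the synthetic demonstration data are all assumptions made for illustration, not details taken from any specific paper listed here.

```python
# Minimal behavior-cloning sketch (illustrative assumptions throughout):
# discrete actions, a small MLP policy, and random tensors standing in
# for expert demonstrations.
import torch
import torch.nn as nn

state_dim, num_actions = 8, 4

# Placeholder demonstrations: in practice these are (state, action) pairs
# collected from an expert; here they are synthetic for self-containment.
states = torch.randn(1024, state_dim)
actions = torch.randint(0, num_actions, (1024,))

# Policy network: a generalized mapping from states to action logits.
policy = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, num_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Supervised training: the demonstrated actions serve as target labels.
for epoch in range(20):
    logits = policy(states)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Acting: pick the highest-scoring action for a new state.
new_state = torch.randn(1, state_dim)
action = policy(new_state).argmax(dim=-1)
```

An IRL method would instead use the same trajectories to fit a reward function (for example via maximum-entropy IRL) and then derive a policy by planning or reinforcement learning against that learned reward.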

KinDER: A Physical Reasoning Benchmark for Robot Learning and Planning (Apr 28, 2026)
Tube Diffusion Policy: Reactive Visual-Tactile Policy Learning for Contact-rich Manipulation (Apr 26, 2026)
From Coarse to Fine: Self-Adaptive Hierarchical Planning for LLM Agents (Apr 25, 2026)
Learning from the Best: Smoothness-Driven Metrics for Data Quality in Imitation Learning (Apr 24, 2026)
Learning from Demonstration with Failure Awareness for Safe Robot Navigation (Apr 25, 2026)
GCImOpt: Learning efficient goal-conditioned policies by imitating optimal trajectories (Apr 24, 2026)
Learn Weightlessness: Imitate Non-Self-Stabilizing Motions on Humanoid Robot (Apr 23, 2026)
RPG: Robust Policy Gating for Smooth Multi-Skill Transitions in Humanoid Fighting (Apr 23, 2026)
Nemobot Games: Crafting Strategic AI Gaming Agents for Interactive Learning with Large Language Models (Apr 23, 2026)
FingerEye: Continuous and Unified Vision-Tactile Sensing for Dexterous Manipulation (Apr 22, 2026)