Imitation Learning


Imitation learning is a framework for learning a behavior policy from demonstrations. Demonstrations are usually presented as state-action trajectories, with each pair indicating the action to take in the state being visited. To learn the behavior policy, the demonstrated actions are typically used in one of two ways. The first, known as Behavior Cloning (BC), treats each action as the target label for its state and learns a generalized mapping from states to actions in a supervised manner. The second, known as Inverse Reinforcement Learning (IRL), views the demonstrated actions as a sequence of decisions and aims to find a reward/cost function under which the demonstrated decisions are optimal.
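As a minimal illustration of the Behavior Cloning idea described above, the sketch below fits a policy to toy state-action demonstrations using a 1-nearest-neighbor classifier; in practice BC typically trains a neural network, and the states, actions, and data here are purely hypothetical.

```python
import numpy as np

# Toy demonstrations (hypothetical): states are 2-D points,
# actions are discrete labels paired with each visited state.
demo_states = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
demo_actions = np.array([0, 1, 2, 3])

def bc_policy(state):
    """Behavior Cloning as supervised learning, here with a
    1-nearest-neighbor fit: return the demonstrated action whose
    state is closest to the queried state."""
    dists = np.linalg.norm(demo_states - np.asarray(state), axis=1)
    return int(demo_actions[np.argmin(dists)])

# The learned mapping generalizes to states not in the demonstrations:
print(bc_policy([0.9, 0.1]))  # nearest demo state is [1, 0] -> action 1
```

An IRL method would instead search for a reward function that makes these demonstrated state-action choices optimal, then derive a policy from that reward.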

Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination

Dec 19, 2024

Policy Decorator: Model-Agnostic Online Refinement for Large Policy Model

Dec 18, 2024

RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation

Dec 18, 2024

Human-Humanoid Robots Cross-Embodiment Behavior-Skill Transfer Using Decomposed Adversarial Learning from Demonstration

Dec 19, 2024

Auto-bidding in real-time auctions via Oracle Imitation Learning (OIL)

Dec 17, 2024

When Should We Prefer State-to-Visual DAgger Over Visual Reinforcement Learning?

Dec 18, 2024

Efficient Diffusion Transformer Policies with Mixture of Expert Denoisers for Multitask Learning

Dec 17, 2024

Chain-of-MetaWriting: Linguistic and Textual Analysis of How Small Language Models Write Young Students Texts

Dec 19, 2024

Knowledge Injection via Prompt Distillation

Dec 19, 2024

Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective

Dec 18, 2024