Mikael Henaff

MaestroMotif: Skill Design from Artificial Intelligence Feedback

Dec 11, 2024

Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback

Oct 30, 2024

Generalization to New Sequential Decision Making Tasks with In-Context Learning

Dec 06, 2023

Motif: Intrinsic Motivation from Artificial Intelligence Feedback

Sep 29, 2023

A Study of Global and Episodic Bonuses for Exploration in Contextual MDPs

Jun 05, 2023

Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories

Oct 12, 2022

Exploration via Elliptical Episodic Bonuses

Oct 11, 2022

PC-PG: Policy Cover Directed Exploration for Provable Policy Gradient Learning

Aug 13, 2020

Explicit Explore-Exploit Algorithms in Continuous State Spaces

Dec 02, 2019

Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning

Nov 13, 2019