Jean Mercat

A Systematic Study of Data Modalities and Strategies for Co-training Large Behavior Models for Robot Manipulation

Feb 01, 2026

OpenThoughts: Data Recipes for Reasoning Models

Jun 05, 2025

Should VLMs be Pre-trained with Image Data?

Mar 10, 2025

Espresso: High Compression For Rich Extraction From Videos for Your Vision-Language Model

Dec 06, 2024

DataComp-LM: In search of the next generation of training sets for language models

Jun 18, 2024

Linearizing Large Language Models

May 10, 2024

DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset

Mar 19, 2024

Language models scale reliably with over-training and on downstream tasks

Mar 13, 2024

Residual Q-Learning: Offline and Online Policy Customization without Value Estimation

Jun 15, 2023

RAP: Risk-Aware Prediction for Robust Planning

Oct 04, 2022