Arthur Douillard

WARP: On the Benefits of Weight Averaged Rewarded Policies (Jun 24, 2024)

A Mechanism-Based Approach to Mitigating Harms from Persuasive Generative AI (Apr 23, 2024)

DiPaCo: Distributed Path Composition (Mar 15, 2024)

Asynchronous Local-SGD Training for Language Modeling (Jan 17, 2024)

DiLoCo: Distributed Low-Communication Training of Language Models (Nov 14, 2023)

Towards Compute-Optimal Transfer Learning (Apr 25, 2023)

CoMFormer: Continual Learning in Semantic and Panoptic Segmentation (Nov 25, 2022)

NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research (Nov 15, 2022)

Foundational Models for Continual Learning: An Empirical Study of Latent Replay (Apr 30, 2022)

Multi-Head Distillation for Continual Unsupervised Domain Adaptation in Semantic Segmentation (Apr 25, 2022)