
Charles Ollion

CMAP, IP Paris

Diffusion bridges vector quantized Variational AutoEncoders

Feb 10, 2022

Learning Natural Language Generation from Scratch

Sep 20, 2021

Invertible Flow Non Equilibrium sampling

Mar 17, 2021

Joint self-supervised blind denoising and noise estimation

Feb 16, 2021

CORE: Color Regression for Multiple Colors Fashion Garments

Oct 06, 2020

The Monte Carlo Transformer: a stochastic self-attention model for sequence prediction

Jul 15, 2020

Insights from the Future for Continual Learning

Jun 24, 2020

Small-Task Incremental Learning

Apr 28, 2020

DistNet: Deep Tracking by displacement regression: application to bacteria growing in the Mother Machine

Mar 17, 2020

OMNIA Faster R-CNN: Detection in the wild through dataset merging and soft distillation

Dec 06, 2018