Samuli Laine

Guiding a Diffusion Model with a Bad Version of Itself
Jun 04, 2024

Applying Guidance in a Limited Interval Improves Sample and Distribution Quality in Diffusion Models
Apr 11, 2024

Analyzing and Improving the Training Dynamics of Diffusion Models
Dec 05, 2023

StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis
Jan 23, 2023

Projection-Domain Self-Supervision for Volumetric Helical CT Reconstruction
Dec 14, 2022

eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers
Nov 17, 2022

Disentangling Random and Cyclic Effects in Time-Lapse Sequences
Jul 04, 2022

Elucidating the Design Space of Diffusion-Based Generative Models
Jun 01, 2022

Alias-Free Generative Adversarial Networks
Jul 15, 2021

Modular Primitives for High-Performance Differentiable Rendering
Nov 06, 2020