Zhisheng Xiao

Imagen 3 (Aug 13, 2024)

EM Distillation for One-step Diffusion Models (May 27, 2024)

DreamInpainter: Text-Guided Subject-Driven Image Inpainting with Diffusion Models (Dec 05, 2023)

HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models (Nov 30, 2023)

UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs (Nov 29, 2023)

MobileDiffusion: Subsecond Text-to-Image Generation on Mobile Devices (Nov 28, 2023)

Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model (Sep 19, 2022)

Tackling the Generative Learning Trilemma with Denoising Diffusion GANs (Dec 15, 2021)

Do We Really Need to Learn Representations from In-domain Data for Outlier Detection? (May 19, 2021)

EBMs Trained with Maximum Likelihood are Generator Models Trained with a Self-adverserial Loss (Feb 23, 2021)