Zhendong Wang

Enhancing and Accelerating Diffusion-Based Inverse Problem Solving through Measurements Optimization

Dec 05, 2024

Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection

Nov 23, 2024

One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation

Oct 28, 2024

Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step

Oct 19, 2024

Diffusion-RPO: Aligning Diffusion Models through Relative Preference Optimization

Jun 10, 2024

Long and Short Guidance in Score identity Distillation for One-Step Text-to-Image Generation

Jun 03, 2024

Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment

May 31, 2024

Diffusion Policies creating a Trust Region for Offline Reinforcement Learning

May 31, 2024

Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation

Apr 05, 2024

Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization

Mar 01, 2024