Hanting Chen

Learning Quantized Adaptive Conditions for Diffusion Models

Sep 26, 2024

One Step Diffusion-based Super-Resolution with Time-Aware Distillation

Aug 14, 2024

Multi-Granularity Semantic Revision for Large Language Model Distillation

Jul 14, 2024

Instruct-IPT: All-in-One Image Processing Transformer via Weight Modulation

Jun 30, 2024

GIM: A Million-scale Benchmark for Generative Image Manipulation Detection and Localization

Jun 24, 2024

Collaboration of Teachers for Semi-supervised Object Detection

May 22, 2024

U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers

May 04, 2024

LIPT: Latency-aware Image Processing Transformer

Apr 09, 2024

Knowledge Distillation with Multi-granularity Mixture of Priors for Image Super-Resolution

Apr 03, 2024

Distilling Semantic Priors from SAM to Efficient Image Restoration Models

Apr 02, 2024