Hongzhi Zhang

UniRestorer: Universal Image Restoration via Adaptively Estimating Image Degradation at Proper Granularity

Dec 28, 2024

VitaGlyph: Vitalizing Artistic Typography with Flexible Dual-branch Diffusion Models

Oct 02, 2024

Small Agent Can Also Rock! Empowering Small Language Models as Hallucination Detector

Jun 17, 2024

Decoding at the Speed of Thought: Harnessing Parallel Decoding of Lexical Units for LLMs

May 24, 2024

A Cross-Field Fusion Strategy for Drug-Target Interaction Prediction

May 23, 2024

MasterWeaver: Taming Editability and Identity for Personalized Text-to-Image Generation

May 10, 2024

Beyond the Sequence: Statistics-Driven Pre-training for Stabilizing Sequential Recommendation Model

Apr 08, 2024

FedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning

Mar 10, 2024

Chain-of-Specificity: An Iteratively Refining Method for Eliciting Knowledge from Large Language Models

Feb 20, 2024

RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution

Jun 30, 2023