Qiang Hu

Holistic White-light Polyp Classification via Alignment-free Dense Distillation of Auxiliary Optical Chromoendoscopy

May 25, 2025

NTIRE 2025 challenge on Text to Image Generation Model Quality Assessment

May 22, 2025

Controllable Image Colorization with Instance-aware Texts and Masks

May 13, 2025

FineVQ: Fine-Grained User Generated Content Video Quality Assessment

Dec 26, 2024

First-frame Supervised Video Polyp Segmentation via Propagative and Semantic Dual-teacher Network

Dec 21, 2024

F-Bench: Rethinking Human Preference Evaluation Metrics for Benchmarking Face Generation, Customization, and Restoration

Dec 17, 2024

VRVVC: Variable-Rate NeRF-Based Volumetric Video Compression

Dec 16, 2024

Large Language Model Supply Chain: Open Problems From the Security Perspective

Nov 03, 2024

ChineseSafe: A Chinese Benchmark for Evaluating Safety in Large Language Models

Oct 24, 2024

Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning

Oct 09, 2024