
Chenyu Zhang

SCAP: Transductive Test-Time Adaptation via Supportive Clique-based Attribute Prompting

Mar 17, 2025

Topology-Preserving Loss for Accurate and Anatomically Consistent Cardiac Mesh Reconstruction

Mar 10, 2025

TRCE: Towards Reliable Malicious Concept Erasure in Text-to-Image Diffusion Models

Mar 10, 2025

In-Context Meta LoRA Generation

Jan 30, 2025

DeepSeek-V3 Technical Report

Dec 27, 2024

HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding

Dec 20, 2024

Self-supervised Monocular Depth and Pose Estimation for Endoscopy with Generative Latent Priors

Nov 26, 2024

Explanation for Trajectory Planning using Multi-modal Large Language Model for Autonomous Driving

Nov 15, 2024

GWQ: Gradient-Aware Weight Quantization for Large Language Models

Oct 30, 2024

Preserving Cardiac Integrity: A Topology-Infused Approach to Whole Heart Segmentation

Oct 14, 2024