Pan Zhou

The Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology

CaPo: Cooperative Plan Optimization for Efficient Embodied Multi-Agent Cooperation

Nov 07, 2024

Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector

Oct 30, 2024

Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation

Oct 29, 2024

Two are better than one: Context window extension with multi-grained self-injection

Oct 25, 2024

Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning

Oct 15, 2024

SubZero: Random Subspace Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning

Oct 11, 2024

Towards Natural Image Matting in the Wild via Real-Scenario Prior

Oct 09, 2024

The Impact of Large Language Models in Academia: from Writing to Speaking

Sep 20, 2024

LPT++: Efficient Training on Mixture of Long-tailed Experts

Sep 17, 2024

MoExtend: Tuning New Experts for Modality and Task Extension

Aug 07, 2024