
Yuhang Zang

InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions

Dec 12, 2024

X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models

Dec 02, 2024

MIA-DPO: Multi-Image Augmented Direct Preference Optimization For Large Vision-Language Models

Oct 23, 2024

PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction

Oct 22, 2024

SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree

Oct 21, 2024

Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate

Oct 09, 2024

BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way

Oct 08, 2024

VLMEvalKit: An Open-Source Toolkit for Evaluating Large Multi-Modality Models

Jul 16, 2024

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Jul 03, 2024

WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation

Jul 02, 2024