
Yunyang Xiong

EdgeTAM: On-Device Track Anything Model

Jan 13, 2025

MetaMorph: Multimodal Understanding and Generation via Instruction Tuning

Dec 18, 2024

Efficient Track Anything

Nov 28, 2024

LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding

Oct 22, 2024

Agent-as-a-Judge: Evaluate Agents with Agents

Oct 14, 2024

An Introduction to Vision-Language Modeling

May 27, 2024

MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases

Feb 22, 2024

SqueezeSAM: User friendly mobile interactive segmentation

Dec 11, 2023

EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything

Dec 01, 2023

MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning

Oct 26, 2023