
Hongxu Yin

NaVILA: Legged Robot Vision-Language-Action Model for Navigation (Dec 05, 2024)

NVILA: Efficient Frontier Visual Language Models (Dec 05, 2024)

VILA-M3: Enhancing Vision-Language Models with Medical Expert Knowledge (Nov 19, 2024)

EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation (Oct 28, 2024)

MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models (Sep 26, 2024)

VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation (Sep 06, 2024)

Eagle: Exploring The Design Space for Multimodal LLMs with Mixture of Encoders (Aug 28, 2024)

LongVILA: Scaling Long-Context Visual Language Models for Long Videos (Aug 21, 2024)

$VILA^2$: VILA Augmented VILA (Jul 24, 2024)

Flextron: Many-in-One Flexible Large Language Model (Jun 11, 2024)