
Lu Hou

Huawei Noah's Ark Lab

ILLUME: Illuminating Your LLMs to See, Draw, and Self-Enhance

Dec 09, 2024

FastAttention: Extend FlashAttention2 to NPUs and Low-resource GPUs

Oct 22, 2024

FlatQuant: Flatness Matters for LLM Quantization

Oct 12, 2024

EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions

Sep 26, 2024

UNIT: Unifying Image and Text Recognition in One Vision Encoder

Sep 06, 2024

Embedding Compression in Recommender Systems: A Survey

Aug 05, 2024

HiRes-LLaVA: Restoring Fragmentation Input in High-Resolution Large Vision-Language Models

Jul 11, 2024

DeCo: Decoupling Token Compression from Semantic Abstraction in Multimodal Large Language Models

May 31, 2024

OAC: Output-adaptive Calibration for Accurate Post-training Quantization

May 23, 2024

Towards Multimodal Video Paragraph Captioning Models Robust to Missing Modality

Mar 28, 2024