
Pavlo Molchanov

Efficient Hybrid Language Model Compression through Group-Aware SSM Pruning

Apr 15, 2025

Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models

Apr 10, 2025

Scaling Vision Pre-Training to 4K Resolution

Mar 25, 2025

TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual Information Decompositions for Downstream Task and Latent Spaces

Mar 10, 2025

FeatSharp: Your Vision Model Features, Sharper

Feb 22, 2025

Advancing Weight and Channel Sparsification with Enhanced Saliency

Feb 05, 2025

Entropy-Regularized Process Reward Model

Dec 15, 2024

RADIO Amplified: Improved Baselines for Agglomerative Vision Foundation Models

Dec 10, 2024

NVILA: Efficient Frontier Visual Language Models

Dec 05, 2024

Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Dec 03, 2024