Marcin Chochowski

LLM Pruning and Distillation in Practice: The Minitron Approach

Aug 21, 2024

Compact Language Models via Pruning and Knowledge Distillation

Jul 19, 2024

Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference

Mar 14, 2024