
Mahdi Nikdan

Efficient Data Selection at Scale via Influence Distillation

May 25, 2025

Quartet: Native FP4 Training Can Be Optimal for Large Language Models

May 20, 2025

QuEST: Stable Training of LLMs with 1-Bit Weights and Activations

Feb 07, 2025

HALO: Hadamard-Assisted Lossless Optimization for Efficient Low-Precision LLM Training and Fine-Tuning

Jan 05, 2025

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Jan 12, 2024

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks

Feb 09, 2023