
Mher Safaryan

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics

Oct 21, 2024

The Iterative Optimal Brain Surgeon: Faster Sparse Recovery by Leveraging Second-Order Information

Aug 30, 2024

MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence

May 24, 2024

AsGrad: A Sharp Unified Analysis of Asynchronous-SGD Algorithms

Oct 31, 2023

Knowledge Distillation Performs Partial Variance Reduction

May 27, 2023

GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity

Oct 28, 2022

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation

Jun 07, 2022

Basis Matters: Better Communication-Efficient Second Order Methods for Federated Learning

Nov 02, 2021

Smoothness-Aware Quantization Techniques

Jun 07, 2021

FedNL: Making Newton-Type Methods Applicable to Federated Learning

Jun 05, 2021