Tomoya Murata

DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning

Feb 08, 2023
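
As loose context for the entry above, here is a minimal NumPy sketch of standard Gaussian-mechanism gradient perturbation (per-example clipping plus noise), i.e., the generic differentially private SGD baseline, not the DIFF2 gradient-difference mechanism itself. The function name, least-squares loss, clipping bound C, and noise scale sigma are all illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    """One differentially private gradient step on least squares (generic sketch,
    not the gradient-difference estimator of the paper above)."""
    n = X.shape[0]
    # Per-example gradients of 0.5 * (x_i @ w - y_i)^2.
    residuals = X @ w - y                       # shape (n,)
    grads = residuals[:, None] * X              # shape (n, d)
    # Clip each per-example gradient to L2 norm at most C.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
    # Average and add Gaussian noise calibrated to the clipping bound.
    noisy_grad = grads.mean(axis=0) + rng.normal(0.0, sigma * C / n, size=w.shape)
    return w - lr * noisy_grad

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 5)), rng.normal(size=128)
w = np.zeros(5)
for _ in range(50):
    w = dp_sgd_step(w, X, y, rng=rng)
```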

Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning

Feb 12, 2022
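
For the entry above, the sketch below only illustrates the standard saddle-escaping trick of injecting isotropic noise when the gradient is small; it is not the paper's bias-variance reduced local estimator, and the threshold, radius, and toy objective are made-up values.

```python
import numpy as np

def perturbed_gd_step(w, grad_fn, lr=0.05, grad_tol=1e-3, radius=1e-2,
                      rng=np.random.default_rng(0)):
    """Gradient step with noise injection near first-order stationary points
    (generic saddle-escaping heuristic, not the paper's algorithm)."""
    g = grad_fn(w)
    if np.linalg.norm(g) <= grad_tol:
        # Perturb within a small ball so strict saddle points can be escaped.
        xi = rng.normal(size=w.shape)
        w = w + radius * xi / np.linalg.norm(xi)
        g = grad_fn(w)
    return w - lr * g

# Toy usage: f(w) = 0.5 * (w[0]^2 - w[1]^2) has a strict saddle at the origin.
grad = lambda w: np.array([w[0], -w[1]])
w = np.zeros(2)
for _ in range(200):
    w = perturbed_gd_step(w, grad)
```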

Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning

Feb 05, 2021
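
The entry above builds on local SGD. Below is a minimal plain local SGD (FedAvg-style) communication round on a least-squares objective split across workers; it shows only the generic baseline, not the paper's bias-variance reduced correction, and the step size, local step count, and data are illustrative.

```python
import numpy as np

def local_sgd_round(w_global, worker_data, local_steps=10, lr=0.1,
                    rng=np.random.default_rng(0)):
    """One round of plain local SGD with server-side averaging
    (generic baseline, not the paper's bias-variance reduced variant)."""
    updated = []
    for X, y in worker_data:
        w = w_global.copy()
        for _ in range(local_steps):
            i = rng.integers(len(y))              # sample one local example
            g = (X[i] @ w - y[i]) * X[i]          # stochastic least-squares gradient
            w -= lr * g
        updated.append(w)
    return np.mean(updated, axis=0)               # server averages the local models

# Toy usage: 4 workers with heterogeneous synthetic data.
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
workers = []
for _ in range(4):
    X = rng.normal(size=(64, 5)) + rng.normal(scale=0.5, size=5)  # shifted features per worker
    workers.append((X, X @ w_true + 0.1 * rng.normal(size=64)))
w = np.zeros(5)
for _ in range(30):
    w = local_sgd_round(w, workers, rng=rng)
```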

Gradient Descent in RKHS with Importance Labeling

Jun 19, 2020
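
As generic context for the entry above (not the paper's importance-labeling scheme), here is early-stopped kernel gradient descent for least squares with a Gaussian kernel; the bandwidth, step size, iteration count, and toy data are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def kernel_gd(X, y, steps=200, lr=1.0, bandwidth=1.0):
    """Functional gradient descent for kernel least squares: the RKHS gradient of
    (1/2n) * sum_i (f(x_i) - y_i)^2 yields the coefficient update
    alpha <- alpha - (lr / n) * (K @ alpha - y)."""
    n = len(y)
    K = gaussian_kernel(X, X, bandwidth)
    alpha = np.zeros(n)
    for _ in range(steps):
        alpha -= (lr / n) * (K @ alpha - y)
    return alpha, K

# Toy usage: fit a smooth 1-D function, early-stopped.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)
alpha, K = kernel_gd(X, y)
train_mse = np.mean((K @ alpha - y) ** 2)
```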

Accelerated Sparsified SGD with Error Feedback

May 29, 2019
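
For the entry above, here is a minimal sketch of the standard error-feedback mechanism with top-k sparsification on a single worker, i.e., the non-accelerated baseline rather than the paper's accelerated method; k, the step size, and the quadratic objective are illustrative.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sparsified_sgd(grad_fn, w0, steps=500, lr=0.05, k=2):
    """Gradient descent with top-k sparsified updates and error feedback:
    the compression error is stored and re-added before the next compression."""
    w, e = w0.copy(), np.zeros_like(w0)
    for _ in range(steps):
        p = lr * grad_fn(w) + e        # correct the update with the accumulated error
        delta = top_k(p, k)            # only k coordinates would be transmitted
        e = p - delta                  # remember what was dropped
        w -= delta
    return w

# Toy usage: quadratic objective 0.5 * ||w - w_star||^2.
w_star = np.array([1.0, -2.0, 3.0, 0.5, -1.5])
grad = lambda w: w - w_star
w = ef_sparsified_sgd(grad, np.zeros(5))
```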

Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation

Sep 05, 2018
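
As a generic reference point for the entry above, here is plain full-gradient iterative hard thresholding (IHT) for sparse linear regression; the sparsity level, step size, and data are illustrative, and no limited attribute observation is modeled.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude coordinates of v, set the rest to zero."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht(X, y, s, steps=300, lr=None):
    """Iterative hard thresholding for min ||X w - y||^2 subject to ||w||_0 <= s."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / L with L the squared largest singular value
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = hard_threshold(w - lr * X.T @ (X @ w - y), s)
    return w

# Toy usage: recover a 3-sparse vector in 50 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50); w_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.05 * rng.normal(size=200)
w_hat = iht(X, y, s=3)
```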

Spectral-Pruning: Compressing deep neural network via spectral analysis

Aug 26, 2018
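
The entry above compresses networks via spectral analysis of hidden representations. The sketch below illustrates only the broad idea in its simplest form: low-rank compression of one linear layer via PCA of its input activations. It is not the paper's node-selection procedure, and the energy threshold and data are arbitrary assumptions.

```python
import numpy as np

def compress_layer(H, W, energy=0.99):
    """Compress a linear layer y = H @ W by projecting onto the leading
    eigenvectors of the activation covariance (generic PCA compression,
    not the Spectral-Pruning selection rule)."""
    cov = H.T @ H / H.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]     # largest first
    cum_ratio = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.argmax(cum_ratio >= energy)) + 1            # smallest rank covering the energy
    P = eigvecs[:, :k]                                     # d x k projection
    return P, P.T @ W                                      # compressed pass: (H @ P) @ (P.T @ W)

# Toy usage: activations with low effective rank.
rng = np.random.default_rng(0)
H = rng.normal(size=(1000, 8)) @ rng.normal(size=(8, 64))  # rank-8 activations in 64 dims
W = rng.normal(size=(64, 32))
P, W_small = compress_layer(H, W)
rel_error = np.linalg.norm(H @ W - (H @ P) @ W_small) / np.linalg.norm(H @ W)
```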

Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization

Sep 19, 2017
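
As context for the entry above, the sketch below shows the SVRG-style variance-reduced gradient estimator this line of work builds on, wrapped in plain SGD updates rather than the paper's accelerated dual averaging; the epoch length, step size, and data are illustrative assumptions.

```python
import numpy as np

def svrg(X, y, epochs=20, inner_steps=None, lr=0.01, rng=np.random.default_rng(0)):
    """SVRG on least squares: each inner step uses the variance-reduced estimator
    g_i(w) - g_i(w_ref) + full_grad(w_ref). Generic sketch with plain SGD updates,
    not the dual averaging or acceleration machinery of the paper above."""
    n, d = X.shape
    inner_steps = inner_steps or n
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]
    full_grad = lambda w: X.T @ (X @ w - y) / n
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref, mu = w.copy(), full_grad(w)               # snapshot and its full gradient
        for _ in range(inner_steps):
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_ref, i) + mu     # variance-reduced gradient
            w -= lr * v
    return w

# Toy usage.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.01 * rng.normal(size=256)
w_hat = svrg(X, y, rng=rng)
```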

Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems

Mar 08, 2016
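
To complement the previous sketch, here is plain regularized dual averaging (RDA) for an L1-regularized logistic loss, i.e., the dual-averaging building block of the entry above without its variance reduction; the regularization weight, gamma schedule, and data are illustrative assumptions.

```python
import numpy as np

def rda_l1_logistic(X, y, lam=0.05, gamma=5.0, steps=5000,
                    rng=np.random.default_rng(0)):
    """Regularized dual averaging for L1-regularized logistic regression:
    keep a running average of stochastic gradients and solve the regularized
    subproblem in closed form each step. Plain RDA baseline only, without the
    variance reduction studied in the entry above."""
    n, d = X.shape
    g_bar = np.zeros(d)                  # running average of past stochastic gradients
    w = np.zeros(d)
    for t in range(1, steps + 1):
        i = rng.integers(n)
        # Stochastic gradient of log(1 + exp(-y_i * x_i @ w)), labels y_i in {-1, +1},
        # computed with a numerically stable sigmoid.
        m = y[i] * (X[i] @ w)
        sig = np.exp(-np.logaddexp(0.0, m))          # = 1 / (1 + exp(m))
        g = -y[i] * X[i] * sig
        g_bar += (g - g_bar) / t
        beta = gamma * np.sqrt(t)
        # Closed-form argmin of <g_bar, w> + lam*||w||_1 + (beta/(2t))*||w||^2:
        # soft-threshold the averaged gradient, then rescale.
        w = -(t / beta) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
    return w

# Toy usage: sparse logistic model.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -2.0, 1.5]
y = np.sign(X @ w_true + 0.1 * rng.normal(size=512))
w_hat = rda_l1_logistic(X, y, rng=rng)
```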