
Aditya Devarakonda

Scalable Dual Coordinate Descent for Kernel Methods

Jun 26, 2024

Sequential and Shared-Memory Parallel Algorithms for Partitioned Local Depths

Jul 31, 2023

Avoiding Communication in Logistic Regression

Nov 16, 2020

AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks

Feb 14, 2018

Avoiding Synchronization in First-Order Methods for Sparse Convex Optimization

Dec 17, 2017

Avoiding Communication in Proximal Methods for Convex Optimization Problems

Oct 24, 2017