Keshi Ge

Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models

Jun 21, 2022

S2 Reducer: High-Performance Sparse Communication to Accelerate Distributed Deep Learning

Oct 05, 2021

An Efficient ADMM-Based Algorithm to Nonconvex Penalized Support Vector Machines

Sep 11, 2018