Youjie Li

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling

Mar 26, 2022

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Mar 20, 2022

Harmony: Overcoming the hurdles of GPU memory capacity to train massive DNN models on commodity servers

Feb 02, 2022

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018