Mingchao Yu

Train Where the Data is: A Case for Bandwidth Efficient Coded Training

Oct 22, 2019

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018