
Nam Sung Kim

Transforming the Hybrid Cloud for Emerging AI Workloads

Nov 20, 2024

Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation

Feb 03, 2023

BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling

Mar 26, 2022

PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Mar 20, 2022

Harmony: Overcoming the hurdles of GPU memory capacity to train massive DNN models on commodity servers

Feb 02, 2022

Bit-Parallel Vector Composability for Neural Acceleration

Apr 11, 2020

Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training

Nov 08, 2018

GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training

Nov 08, 2018

GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks

May 10, 2018