Dhabaleswar K. Panda

Accelerating Large Language Model Training with Hybrid GPU-based Compression

Sep 04, 2024

Training Ultra Long Context Language Model with Fully Pipelined Distributed Transformer

Aug 30, 2024

Efficient MPI-based Communication for GPU-Accelerated Dask Applications

Jan 21, 2021