
Bojian Zheng

Tempo: Accelerating Transformer-Based Model Training through Memory Footprint Reduction

Oct 19, 2022

Hidet: Task Mapping Programming Paradigm for Deep Learning Tensor Programs

Oct 18, 2022

EcoRNN: Fused LSTM RNN Implementation with Data Layout Optimization

May 22, 2018

TBD: Benchmarking and Analyzing Deep Neural Network Training

Apr 14, 2018