Jianbin Fang

Optimizing Streaming Parallelism on Heterogeneous Many-Core Architectures: A Machine Learning Based Approach

Mar 05, 2020

Characterizing Scalability of Sparse Matrix-Vector Multiplications on Phytium FT-2000+ Many-cores

Nov 20, 2019

To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference

Oct 21, 2018