
Saugata Ghose

Accelerating Neural Network Inference with Processing-in-DRAM: From the Edge to the Cloud

Sep 19, 2022

Heterogeneous Data-Centric Architectures for Modern Data-Intensive Applications: Case Studies in Machine Learning and Databases

May 29, 2022

Google Neural Network Models for Edge Devices: Analyzing and Mitigating Machine Learning Inference Bottlenecks

Sep 29, 2021

Mitigating Edge Machine Learning Inference Bottlenecks: An Empirical Study on Accelerating Google Edge Models

Mar 01, 2021