Mario Almeida

NAWQ-SR: A Hybrid-Precision NPU Engine for Efficient On-Device Super-Resolution

Dec 15, 2022

Smart at what cost? Characterising Mobile Deep Neural Networks in the wild

Sep 28, 2021

DynO: Dynamic Onloading of Deep Neural Networks from Cloud to Device

Apr 20, 2021

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout

Mar 01, 2021

SPINN: Synergistic Progressive Inference of Neural Networks over Device and Cloud

Aug 24, 2020

EmBench: Quantifying Performance Variations of Deep Neural Networks across Modern Commodity Devices

May 17, 2019