Abstract: Partitioning and distributing deep neural networks (DNNs) across end-devices, edge resources, and the cloud has a potential twofold advantage: preserving the privacy of the input data and reducing the ingress bandwidth demand beyond the edge. However, for a given DNN, identifying the partition configuration that maximizes performance is a significant challenge, since (i) the combination of target hardware resources that maximizes performance and (ii) the sequence of DNN layers that should be distributed across those resources need to be determined, while accounting for (iii) user-defined objectives/constraints on partitioning. This paper presents Scission, a tool for the automated benchmarking of DNNs on a given set of target device, edge, and cloud resources to determine optimal partitions that maximize DNN performance. The decision-making approach is context-aware: it capitalizes on the hardware capabilities of the target resources, their locality, the characteristics of DNN layers, and the network condition. Experimental studies are carried out on 18 DNNs. The decisions made by Scission cannot be made manually by a human, given the complexity and the number of dimensions affecting the search space. The results obtained validate that Scission is a valuable tool for achieving performance-driven and context-aware distributed DNNs that leverage the edge. Scission is available for public download at https://github.com/qub-blesson/Scission.