Abstract: Deep Neural Networks (DNNs) have achieved great success in complex tasks. However, the complexity of their hierarchical structures inevitably incurs high computational cost and storage consumption, hindering their wide deployment on Internet-of-Things (IoT) devices, which have limited computational capability and storage capacity. It is therefore necessary to investigate techniques for compacting DNNs. Despite tremendous advances in this area, few surveys summarize DNN-compaction techniques, especially for IoT applications. Hence, this paper presents a comprehensive study of DNN-compaction techniques. We categorize them into three major types: 1) network model compression, 2) Knowledge Distillation (KD), and 3) modification of network structures. We also elaborate on the diversity of these approaches and make side-by-side comparisons. Moreover, we discuss the deployment of compacted DNNs in various IoT applications and outline future research directions.