Abstract: Improving the performance of deep learning models and reducing their training times are ongoing challenges in deep neural networks. One of the several approaches proposed to address these challenges is to increase the depth of the network. Such deeper networks not only increase training times but also suffer from the vanishing gradient problem during training. In this work, we propose a gradient amplification approach for training deep learning models to prevent vanishing gradients, and we develop a training strategy that enables or disables gradient amplification across epochs with different learning rates. We perform experiments on VGG-19 and ResNet models (ResNet-18 and ResNet-34), and study the impact of the amplification parameters on these models in detail. Our proposed approach improves the performance of these deep learning models even at higher learning rates, thereby allowing them to achieve higher performance with reduced training time.
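The core idea of gradient amplification can be sketched as scaling the gradients of selected layers by a factor before the weight update, so early layers still receive usable signal. This is a hypothetical illustration only: the paper's exact amplification rule (which layers, which factor, and in which epochs it is enabled) is not specified here, and the function and parameter names are invented.

```python
import numpy as np

def amplify_gradients(grads, amp_layers, factor):
    """Scale the gradients of selected layers by an amplification factor.

    grads      : list of per-layer gradient arrays (layer 0 = earliest layer)
    amp_layers : set of layer indices to amplify (assumption: the earliest
                 layers, where vanishing gradients are most severe)
    factor     : amplification factor applied before the optimizer update
    """
    return [g * factor if i in amp_layers else g
            for i, g in enumerate(grads)]

# Toy gradients for a 3-layer network: magnitudes shrink toward layer 0,
# mimicking vanishing gradients.
grads = [np.full((2, 2), 0.001),
         np.full((2, 2), 0.01),
         np.full((2, 2), 0.1)]

# Amplify the two earliest layers; leave the last layer untouched.
amplified = amplify_gradients(grads, amp_layers={0, 1}, factor=10.0)
```

A training strategy like the one described would toggle this scaling on or off between epochs, e.g. disabling it once the learning rate is decayed.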
Abstract: Wake-up is the primary function of voice interaction, the mainstream scheme in human-machine interaction (HMI) applications for the smart home. If the same wake-up word is used for all devices, every device will respond, which causes chaos and reduces the user's quality of experience (QoE). The only way to solve this problem is to have all devices in the same wireless local area network (WLAN) compete to wake up based on a common scoring rule, so that the device closest to the user is selected to respond. To this end, this paper proposes a competitive wake-up scheme with an elaborately designed calibration method for the received energy of the microphones. Moreover, user orientation is used to assist in determining the optimal device. Experiments demonstrate the feasibility and validity of this scheme.
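The competition step can be sketched as each device reporting a calibrated score and the network selecting the highest one. This is a minimal sketch under stated assumptions: the paper's actual scoring rule, energy calibration, and orientation weighting are not detailed here, and all names and values below are hypothetical.

```python
def select_wakeup_device(scores):
    """Pick the device that should respond to the wake-up word.

    `scores` maps device id -> calibrated received-energy score.
    Assumption: after calibration, a higher score indicates the device
    is closer to (or better oriented toward) the user.
    """
    return max(scores, key=scores.get)

# Hypothetical calibrated scores for three devices on the same WLAN.
scores = {"speaker": 0.62, "tv": 0.48, "thermostat": 0.21}
winner = select_wakeup_device(scores)  # the speaker wins the competition
```

In practice the devices would exchange these scores over the WLAN and only the winner would play a response, suppressing the others.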