Abstract: We empirically investigate how learning randomly generated labels in parallel to the true class labels affects memorization, model complexity, and generalization in supervised deep neural networks. To this end, we introduce a multi-head network architecture that extends standard CNN architectures. Inspired by methods from fair AI, our approach allows the network to unlearn the random labels, preventing it from memorizing individual samples. Based on the concept of Rademacher complexity, we first use the proposed method as a complexity metric to analyze the effects of common regularization techniques and to challenge the traditional understanding of feature extraction and classification in CNNs. Second, we propose a novel regularizer that effectively reduces sample memorization. However, contrary to the predictions of classical statistical learning theory, we do not observe corresponding improvements in generalization.
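The abstract does not spell out the architecture, but unlearning via an auxiliary head is commonly implemented with a gradient reversal layer, as in domain-adversarial training. Below is a minimal, hypothetical sketch of that pattern: a shared backbone feeding a class head and a random-label head, where the reversal layer makes minimizing the random-label loss push the backbone to forget the random labels. All names (`GradReverse`, `MultiHeadCNN`, `lam`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class MultiHeadCNN(nn.Module):
    """Shared CNN backbone with a class head and a random-label head.

    The random-label head sits behind the gradient reversal layer, so
    training it to predict the random labels drives the backbone to
    *unlearn* them instead of memorizing individual samples.
    """
    def __init__(self, backbone, feat_dim, n_classes, n_random, lam=1.0):
        super().__init__()
        self.backbone = backbone            # any feature extractor -> (B, feat_dim)
        self.class_head = nn.Linear(feat_dim, n_classes)
        self.random_head = nn.Linear(feat_dim, n_random)
        self.lam = lam

    def forward(self, x):
        z = self.backbone(x)
        class_logits = self.class_head(z)
        random_logits = self.random_head(GradReverse.apply(z, self.lam))
        return class_logits, random_logits
```

In this sketch both heads are trained with cross-entropy; the sum of the two losses then acts as the memorization-reducing regularizer described above.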
Abstract: Sharpness-Aware Minimization (SAM), a recently proposed optimization algorithm for deep neural networks, perturbs the parameters by a gradient ascent step before the gradient calculation in order to guide the optimization into regions of parameter space with flat loss. While SAM demonstrates significant generalization improvements and thus reduced overfitting, it doubles the computational cost due to the additional gradient calculation, making it infeasible when computational capacity is limited. Motivated by Nesterov Accelerated Gradient (NAG), we propose Momentum-SAM (MSAM), which perturbs parameters in the direction of the accumulated momentum vector to achieve low sharpness without significant computational or memory overhead over SGD or Adam. We evaluate MSAM in detail and reveal insights into the separable mechanisms of NAG, SAM, and MSAM regarding training optimization and generalization. Code is available at https://github.com/MarlonBecker/MSAM.
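To make the single-gradient trick concrete, here is a minimal sketch of one MSAM-style SGD step. It assumes PyTorch's momentum convention (the buffer accumulates raw gradients, so it points roughly along the loss gradient, i.e., the ascent direction); the function name, hyperparameters, and explicit buffer handling are illustrative, not the authors' reference implementation (see the linked repository for that).

```python
import torch

def msam_sgd_step(model, loss_fn, data, target, momentum_buffers,
                  lr=0.1, momentum=0.9, rho=0.3):
    """One SGD-with-momentum step with an MSAM-style perturbation.

    Unlike SAM, no extra gradient pass is needed: the normalized momentum
    vector serves as a free proxy for SAM's ascent direction.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    norm = torch.sqrt(sum((b ** 2).sum() for b in momentum_buffers)).item() + 1e-12
    with torch.no_grad():
        for p, b in zip(params, momentum_buffers):
            p.add_(b, alpha=rho / norm)          # perturb along momentum

    model.zero_grad()
    loss = loss_fn(model(data), target)          # single gradient evaluation
    loss.backward()

    with torch.no_grad():
        for p, b in zip(params, momentum_buffers):
            p.sub_(b, alpha=rho / norm)          # undo the perturbation
            b.mul_(momentum).add_(p.grad)        # accumulate momentum
            p.sub_(b, alpha=lr)                  # standard momentum-SGD update
    return loss.item()
```

The buffers would be initialized as zero tensors shaped like the parameters; with `rho = 0`, the step reduces exactly to plain SGD with momentum.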
Abstract: Nonlinear behavior in the hopping transport of interacting charges enables reconfigurable logic in disordered dopant network devices, where voltages applied at control electrodes tune the relation between the voltages applied at input electrodes and the current measured at an output electrode. Using kinetic Monte Carlo simulations, we analyze on three levels the nonlinear aspects of variable-range hopping transport that are critical for realizing Boolean logic gates in these devices. First, we quantify how often individual gates occur for random choices of control voltages. We find that linearly inseparable gates such as XOR are less likely to occur than linearly separable gates such as AND, even though the number of distinct regions in the multidimensional control-voltage space in which AND or XOR gates occur is comparable. Second, we use principal component analysis to characterize the distribution of the output current vectors for the (00,10,01,11) logic input combinations in terms of the eigenvectors and eigenvalues of the output covariance matrix. This allows a simple and direct comparison between different simulated devices, as well as with experimental devices. Third, we quantify the nonlinearity in the distribution of the output current vectors that is necessary for realizing Boolean functionality by introducing three nonlinearity indicators. The analysis provides a physical interpretation of the effects of changing the hopping distance and temperature, and is used in a comparison with data generated by a deep neural network trained on a physical device.
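The PCA step in the second analysis level is a standard eigendecomposition of the 4x4 covariance matrix of the output current vectors. A minimal sketch, with placeholder random data standing in for the simulated outputs (one row per control-voltage setting, one column per logic input combination):

```python
import numpy as np

# Placeholder data: rows are control-voltage settings, columns are the
# output currents for the four logic inputs (00, 10, 01, 11).
outputs = np.random.default_rng(0).normal(size=(10_000, 4))

centered = outputs - outputs.mean(axis=0)
cov = centered.T @ centered / (len(centered) - 1)   # 4x4 output covariance
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order

# Sort descending: the leading eigenvector captures the dominant variation
# of the output vectors; the trailing components carry the residual
# structure relevant for linearly inseparable gates such as XOR.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(eigvals / eigvals.sum())                      # explained variance ratios
```

Because the eigenvalues and eigenvectors are device-independent summaries of the output distribution, they can be compared directly across simulated devices, experimental devices, and surrogate models, as the abstract describes.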