Abstract: Utilizing a spectral dictionary learned from a pair of similar-scene multispectral and hyperspectral images, it is possible to reconstruct a desired hyperspectral image from a single multispectral image. However, the differences between the similar scene and the desired hyperspectral image make it difficult to directly apply the spectral dictionary from the training domain to the task domain. To this end, this paper proposes a compensation-matrix-based dictionary transfer method for similar-scene multispectral image spectral super-resolution, aiming to reconstruct a more accurate high-spatial-resolution hyperspectral image. Specifically, a spectral dictionary transfer scheme is established using a compensation matrix with a similarity constraint, which transfers the spectral dictionary learned in the training domain to the spectral super-resolution domain. Subsequently, the sparse coefficient matrix is optimized under sparse and low-rank constraints. Experimental results on two AVIRIS datasets from different scenes indicate that the proposed method outperforms related state-of-the-art (SOTA) methods.
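The abstract does not spell out the optimization problem, so the following is only a minimal, illustrative Python sketch of one plausible formulation: an additive compensation matrix C penalized toward zero (so the transferred dictionary D_src + C stays similar to the source dictionary), an ISTA-style soft-thresholding step for the sparse constraint, and singular value thresholding for the low-rank constraint on the coefficient matrix. The variable names, the spectral response matrix R, and the alternating update order are all assumptions, not the paper's formulation.

```python
import numpy as np

def soft_threshold(X, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def transfer_dictionary(D_src, Y_ms, R, lam=1e-2, lr=1e-3, iters=200):
    """Adapt a source spectral dictionary D_src (hs_bands x atoms) to the
    task domain, given the target multispectral image Y_ms (ms_bands x
    pixels) and a spectral response matrix R (ms_bands x hs_bands).
    Illustrative objective (an assumption, not the paper's):
        min_{C,A} ||R (D_src + C) A - Y_ms||_F^2
                  + lam ||C||_F^2              (similarity constraint on C)
                  + lam ||A||_1 + lam ||A||_*  (sparse + low-rank on A)
    solved here by crude alternating updates."""
    C = np.zeros_like(D_src)
    A = np.zeros((D_src.shape[1], Y_ms.shape[1]))
    for _ in range(iters):
        D = R @ (D_src + C)                      # dictionary seen in MS space
        # Sparse step: one ISTA iteration on the coefficient matrix A.
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)
        A = soft_threshold(A - step * (D.T @ (D @ A - Y_ms)), step * lam)
        # Low-rank step: singular value thresholding on A.
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        A = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
        # Compensation step: gradient descent on C with similarity penalty.
        resid = R @ (D_src + C) @ A - Y_ms
        C -= lr * (R.T @ resid @ A.T + lam * C)
    return D_src + C, A                          # transferred dictionary, codes
```

In this sketch the reconstructed hyperspectral image would be (D_src + C) @ A: the similarity penalty keeps the transferred atoms anchored to the training-domain dictionary, while the data term adapts them to the task domain.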
Abstract: Natural gradient descent (NGD) is a powerful optimization technique for machine learning, but the computational cost of inverting the Fisher information matrix limits its application to training deep neural networks. To overcome this challenge, we propose a novel optimization method for training deep neural networks called structured natural gradient descent (SNGD). Theoretically, we demonstrate that optimizing the original network with NGD is equivalent to optimizing, with fast gradient descent (GD), a reconstructed network obtained by a structural transformation of the parameter matrix. Accordingly, we decompose the calculation of the global Fisher information matrix into efficient computations of local Fisher matrices by constructing local Fisher layers in the reconstructed network, which speeds up training. Experimental results on various deep networks and datasets demonstrate that SNGD achieves faster convergence than NGD while finding comparable solutions. Furthermore, our method outperforms traditional GD methods in both efficiency and effectiveness. Thus, the proposed method has the potential to significantly improve the scalability and efficiency of NGD in deep learning applications. Our source code is available at https://github.com/Chaochao-Lin/SNGD.
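To make the local-Fisher idea concrete, here is a minimal PyTorch sketch that preconditions each parameter block with its own empirical Fisher matrix instead of inverting the global Fisher over all parameters at once. The block-diagonal approximation, the per-sample gradient construction, and the update rule are illustrative assumptions and are not taken from the paper's reconstructed-network formulation.

```python
import torch
import torch.nn as nn

def local_fisher_step(model, loss_fn, xb, yb, lr=0.1, damping=1e-3):
    """Hedged sketch: approximate the global Fisher information matrix
    with per-parameter-block (local) Fisher matrices, so each block only
    requires a small linear solve instead of one network-wide inversion."""
    params = [p for p in model.parameters() if p.requires_grad]
    fishers = [torch.zeros(p.numel(), p.numel()) for p in params]
    grads = [torch.zeros(p.numel()) for p in params]
    B = xb.shape[0]
    for i in range(B):  # per-sample gradients build the empirical Fisher
        loss = loss_fn(model(xb[i:i+1]), yb[i:i+1])
        gs = torch.autograd.grad(loss, params)
        for f, g_acc, g in zip(fishers, grads, gs):
            v = g.reshape(-1)
            f += torch.outer(v, v) / B   # local empirical Fisher block
            g_acc += v / B               # mini-batch gradient
    with torch.no_grad():
        for p, f, g in zip(params, fishers, grads):
            f += damping * torch.eye(f.shape[0])  # damping for invertibility
            nat = torch.linalg.solve(f, g)        # cheap per-block solve
            p -= lr * nat.reshape(p.shape)

# Usage on a tiny MLP (layer sizes kept small so the local Fisher
# blocks stay tractable in this illustration):
model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 3))
xb, yb = torch.randn(16, 4), torch.randint(0, 3, (16,))
local_fisher_step(model, nn.CrossEntropyLoss(), xb, yb)
```

The payoff of any such block-diagonal scheme is that inverting many small per-layer matrices is far cheaper than inverting one matrix over all network parameters, which is the scalability bottleneck of plain NGD that the abstract highlights.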