Abstract: Nonlinear hyperspectral unmixing has recently received considerable attention, as linear mixture models do not yield acceptable accuracy in some problems. In fact, most nonlinear unmixing methods are designed under specific assumptions about the nonlinearity model, which subsequently limits unmixing performance. In this paper, we propose an unsupervised nonlinear unmixing approach based on deep learning that incorporates a general nonlinear model with no special assumptions. This model consists of two branches. In the first branch, endmembers are learned by reconstructing the rows of hyperspectral images through a series of hidden layers, and in the second branch, abundance values are learned from the columns of the respective images. Then, using multi-task learning, we introduce an auxiliary task that enforces the two branches to work together. This technique can be regarded as a regularizer that mitigates overfitting and improves the performance of the overall network. Extensive experiments on synthetic and real data verify the effectiveness of the proposed method compared with several state-of-the-art hyperspectral unmixing methods.
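A minimal PyTorch sketch of the two-branch idea summarized in this abstract is shown below. The layer sizes, the softmax abundance constraint, and the weight of the joint reconstruction term are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: a row branch whose codes play the role of endmembers and
# a column branch that outputs per-pixel abundances, tied by an auxiliary loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchUnmixer(nn.Module):
    def __init__(self, n_bands, n_pixels, n_endmembers, hidden=128):
        super().__init__()
        # Branch 1 (rows): encode each spectral band's row of the unfolded image.
        self.row_encoder = nn.Sequential(
            nn.Linear(n_pixels, hidden), nn.ReLU(),
            nn.Linear(hidden, n_endmembers),
        )
        self.row_decoder = nn.Linear(n_endmembers, n_pixels, bias=False)
        # Branch 2 (columns): map each pixel spectrum to abundance values.
        self.col_encoder = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_endmembers),
        )

    def forward(self, Y):
        # Y: hyperspectral image unfolded to shape (n_bands, n_pixels).
        endmembers = self.row_encoder(Y)                  # (n_bands, n_endmembers)
        row_recon = self.row_decoder(endmembers)          # reconstruct the rows
        abundances = F.softmax(self.col_encoder(Y.t()), dim=-1)  # sum-to-one abundances
        return row_recon, endmembers, abundances

def total_loss(Y, row_recon, endmembers, abundances, lam=0.1):
    # Main task: reconstruct the rows of the image.
    main = F.mse_loss(row_recon, Y)
    # Auxiliary multi-task term forcing the branches to agree: abundances
    # combined with the row-branch endmembers should also explain the image.
    aux = F.mse_loss(abundances @ endmembers.t(), Y.t())
    return main + lam * aux  # lam is an assumed weight, not taken from the paper
```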
Abstract: Gem5, an open-source, flexible, and cost-effective simulator, is widely recognized and utilized in both academia and industry for hardware simulation. However, the typically time-consuming nature of simulating programs on Gem5 underscores the need for a predictive model that can estimate simulation time. As of now, no such dataset or model exists. In response to this gap, this paper makes a novel contribution by introducing a unique dataset specifically created for this purpose. We also conduct an analysis of the effects of different instruction types on simulation time in Gem5. We then employ three distinct models leveraging CodeBERT to perform the prediction task on the developed dataset. Our best regression model achieves a Mean Absolute Error (MAE) of 0.546, while our top-performing classification model records an accuracy of 0.696. Our models establish a foundation for future investigations of this topic, serving as benchmarks against which subsequent models can be compared. We hope that our contribution can stimulate further research in this field. The dataset we used is available at https://github.com/XueyangLiOSU/Gem5Pred.
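A minimal sketch of a CodeBERT-based regression setup of the kind this abstract describes is given below, using the public microsoft/codebert-base checkpoint from HuggingFace. The pooling choice, head sizes, and target scaling are assumptions for illustration and are not necessarily the paper's exact setup.

```python
# Hypothetical sketch: CodeBERT encoder with a small regression head that maps
# a source-code snippet to a predicted Gem5 simulation time.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class Gem5TimeRegressor(nn.Module):
    def __init__(self, base_model="microsoft/codebert-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # first-token ("[CLS]"-style) representation
        return self.head(cls).squeeze(-1)      # predicted (possibly log-scaled) time

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = Gem5TimeRegressor()
batch = tokenizer(["int main() { return 0; }"], return_tensors="pt",
                  truncation=True, padding=True)
pred = model(batch["input_ids"], batch["attention_mask"])
# Training with nn.L1Loss() would match the MAE metric reported in the abstract;
# a classification variant could instead bin times and use nn.CrossEntropyLoss().
```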
Abstract: Matrix completion has received a vast amount of attention and research due to its wide applications in various fields of study. Existing matrix completion methods consider only nonlinear (or linear) relations among the entries of a data matrix and ignore the latent linear (or nonlinear) relationships. This paper introduces a new latent variable model for the data matrix that combines linear and nonlinear models, and designs a novel deep-neural-network-based matrix completion algorithm to capture both linear and nonlinear relations among the entries of the data matrix. The proposed method consists of two branches. The first branch learns the latent representations of columns and reconstructs the columns of the partially observed matrix through a series of hidden neural network layers. The second branch does the same for the rows. In addition, based on multi-task learning principles, we enforce these two branches to work together and introduce a new regularization technique to reduce over-fitting. More specifically, recovering the missing entries of the data is the main task, and manifold learning is performed as an auxiliary task. The auxiliary task constrains the weights of the network, so it can be considered a regularizer that improves the main task and reduces over-fitting. Experimental results obtained on synthetic data and several real-world datasets verify the effectiveness of the proposed method compared with state-of-the-art matrix completion methods.
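A minimal PyTorch sketch of the two-branch completion scheme with a manifold-style auxiliary term is given below. The layer sizes, the zero-filling of missing entries, the specific pairwise-distance auxiliary loss, and the weight lam are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch: one autoencoder over columns and one over rows, fit only
# on observed entries, with an auxiliary term that preserves pairwise geometry.
import torch
import torch.nn as nn
import torch.nn.functional as F

def branch(in_dim, latent=32, hidden=128):
    enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, latent))
    dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, in_dim))
    return enc, dec

class TwoBranchCompletion(nn.Module):
    def __init__(self, n_rows, n_cols):
        super().__init__()
        self.col_enc, self.col_dec = branch(n_rows)   # a column has n_rows entries
        self.row_enc, self.row_dec = branch(n_cols)   # a row has n_cols entries

    def forward(self, X):
        # X: partially observed matrix with missing entries filled by zeros.
        col_z = self.col_enc(X.t())
        col_recon = self.col_dec(col_z).t()
        row_z = self.row_enc(X)
        row_recon = self.row_dec(row_z)
        return col_recon, row_recon, col_z

def loss_fn(X, mask, col_recon, row_recon, col_z, lam=0.1):
    # Main task: agree with the observed entries only (mask == 1 where observed).
    main = F.mse_loss(col_recon * mask, X * mask) + F.mse_loss(row_recon * mask, X * mask)
    # Auxiliary manifold-style task (illustrative): columns that are close in the
    # observed data should stay close in the latent space.
    aux = F.mse_loss(torch.cdist(col_z, col_z), torch.cdist(X.t(), X.t()))
    return main + lam * aux  # lam is an assumed regularization weight
```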