Abstract: The increasing focus on predicting renewable energy production aligns with advancements in deep learning (DL). The inherent variability of renewable sources and the complexity of prediction methods require robust approaches, such as DL models, in the renewable energy sector. DL models are preferred over traditional machine learning (ML) because they capture complex, nonlinear relationships in renewable energy datasets. This study examines key factors influencing the accuracy of DL techniques, including sampling and hyperparameter optimization, by comparing various methods and training/test split ratios within a DL framework. Seven deep learning methods, LSTM, Stacked LSTM, CNN, CNN-LSTM, DNN, Time-Distributed MLP (TD-MLP), and Autoencoder (AE), are evaluated using a dataset combining weather and photovoltaic power output data from 12 locations. Regularization techniques such as early stopping, neuron dropout, and L1 and L2 regularization are applied to address overfitting. The results demonstrate that the combination of early stopping, dropout, and L1 regularization performs best at reducing overfitting in the CNN and TD-MLP models with a larger training set, while the combination of early stopping, dropout, and L2 regularization is the most effective at reducing overfitting in the CNN-LSTM and AE models with a smaller training set.
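To illustrate how the regularization combinations described above can be wired together, the following is a minimal sketch in Python using TensorFlow/Keras. It is not the authors' exact architecture: the layer sizes, lookback window, feature count, penalty strengths, and dropout rate are illustrative assumptions; only the combination of early stopping, dropout, and an L1 weight penalty mirrors the abstract.

```python
# Sketch: a 1D CNN for PV power forecasting combining early stopping,
# dropout, and L1 regularization (hyperparameters are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

WINDOW, N_FEATURES = 24, 8  # assumed: 24-step lookback, 8 weather features

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.Conv1D(64, kernel_size=3, activation="relu",
                  kernel_regularizer=regularizers.l1(1e-4)),  # L1 weight penalty
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(1e-4)),
    layers.Dropout(0.3),   # neuron dropout
    layers.Dense(1),       # next-step PV power output
])
model.compile(optimizer="adam", loss="mse")

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True)

# X_train, y_train, X_val, y_val would come from the weather/PV dataset split:
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=200, callbacks=[early_stop])
```

Swapping `regularizers.l1` for `regularizers.l2` yields the L2 variant reported as more effective for the CNN-LSTM and AE models with a smaller training set.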
Abstract: Recent developments in adiabatic quantum machine learning (AQML) methods and applications based on the quadratic unconstrained binary optimization (QUBO) model have received attention from academics and practitioners. Traditional machine learning methods such as support vector machines, balanced k-means clustering, linear regression, decision tree splitting, restricted Boltzmann machines, and deep belief networks can be transformed into a QUBO model. The training of AQML models is the main computational bottleneck. Heuristic solvers such as simulated annealing and multiple start tabu search (MSTS) are used to speed up the training of QUBO-based AQML models. The main purpose of this paper is to present a hybrid heuristic embedding an r-flip strategy that solves large-scale QUBO instances with better solutions and shorter computing times than the state-of-the-art MSTS method. Substantial computational experiments comparing the r-flip hybrid heuristic with MSTS on a set of benchmark instances and three large-scale QUBO instances are reported. The r-flip algorithm provides very high-quality solutions within CPU time limits of 60 and 600 seconds.
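For context on the QUBO model and the flip moves that r-flip strategies generalize, the following is a minimal sketch in Python. It shows the QUBO objective x^T Q x and an O(n) incremental evaluation of a single-bit (1-flip) move; the greedy loop is purely illustrative and is not the paper's hybrid heuristic, which embeds r-flip moves (flipping r bits at once) inside a tabu-search framework.

```python
# Sketch: QUBO objective and incremental 1-flip move evaluation
# (illustrative only; not the paper's r-flip hybrid heuristic).
import numpy as np

def qubo_value(Q, x):
    """f(x) = x^T Q x for a binary vector x."""
    return float(x @ Q @ x)

def one_flip_delta(Q, x, k):
    """Change in f(x) when bit k is flipped, computed in O(n)."""
    # delta = (1 - 2*x_k) * (Q_kk + sum_{j != k} (Q_kj + Q_jk) * x_j)
    n = len(x)
    cross = sum((Q[k, j] + Q[j, k]) * x[j] for j in range(n) if j != k)
    return (1 - 2 * x[k]) * (Q[k, k] + cross)

def greedy_one_flip(Q, x):
    """Apply the best improving 1-flip until a local optimum (minimization)."""
    improved = True
    while improved:
        improved = False
        deltas = [one_flip_delta(Q, x, k) for k in range(len(x))]
        k = int(np.argmin(deltas))
        if deltas[k] < -1e-12:
            x[k] = 1 - x[k]
            improved = True
    return x

rng = np.random.default_rng(0)
Q = rng.normal(size=(30, 30))            # random dense QUBO instance (assumed)
x = rng.integers(0, 2, size=30)
x = greedy_one_flip(Q, x.copy())
print(qubo_value(Q, x))
```

An r-flip move evaluates the combined delta of flipping r bits simultaneously, which can escape local optima that no single 1-flip improves; efficient delta bookkeeping of this kind is what keeps such heuristics fast on large-scale instances.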