Abstract: To address the high levels of uncertainty associated with photovoltaic energy, an increasing number of studies focusing on short-term solar forecasting have been published. Most of these studies use deep learning-based models to directly forecast a solar irradiance or photovoltaic power value from an input sequence of sky images. Recently, however, advances in generative modeling have led to approaches that divide the forecasting problem into two sub-problems: 1) future event prediction, i.e. generating future sky images; and 2) solar irradiance or photovoltaic power nowcasting, i.e. predicting the concurrent value from a single image. One such approach is the SkyGPT model, whose authors show that the potential for improvement is much larger for the nowcasting model than for the generative model. In this paper, we therefore focus on the solar irradiance nowcasting problem and conduct an extensive benchmark of deep learning architectures across the widely used Folsom, SIRTA, and NREL datasets. Moreover, we perform ablation experiments on different training configurations and data processing techniques, including the choice of the target variable used for training and adjustments of the timestamp alignment between images and irradiance measurements. In particular, we draw attention to a potential error associated with the sky image timestamps in the Folsom dataset and discuss a possible fix. All our results are reported in terms of both the root mean squared error and the mean absolute error, and, by leveraging the three datasets, we demonstrate that our findings are consistent across different solar stations.