Graph Learning (GL) lies at the core of inferring and analyzing connections in data mining and machine learning (ML). Given a dataset of graph signals and suitable assumptions, Graph Signal Processing (GSP) tools can impose practical constraints on the GL problem. One such constraint allows inferring a graph with desired frequency signatures, i.e., spectral templates. However, the severe computational burden remains a challenging barrier, especially when inferring from high-dimensional graph signals. To address this issue, and for the case in which the underlying graph has a product structure, we propose learning the (high-dimensional) product graph from product spectral templates with significantly reduced complexity, rather than learning it directly from high-dimensional graph signals; to the best of our knowledge, this problem has not been addressed in the related literature. In contrast to the few existing approaches, our method can learn all types of product graphs (composed of more than two factor graphs) without prior knowledge of the product type, and it requires fewer parameters. Experimental results on both synthetic and real-world data, namely brain signal analysis and multi-view object images, yield explainable and meaningful factor graphs that are supported by domain expert research, while outperforming the few existing, more restricted approaches.
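As a brief, illustrative sketch of the structure that makes this reduction possible (the symbols $S_i$, $V_i$, $\Lambda_i$, and $N_i$ are introduced here only for exposition and are not notation from the abstract): if two factor graphs have shift operators, e.g., adjacency matrices, with eigendecompositions $S_i = V_i \Lambda_i V_i^{\top}$, $i = 1, 2$, then the three standard graph products,
\[
S_{\times} = S_1 \otimes S_2, \qquad
S_{\square} = S_1 \otimes I_{N_2} + I_{N_1} \otimes S_2, \qquad
S_{\boxtimes} = S_{\times} + S_{\square},
\]
(Kronecker, Cartesian, and strong, respectively) are all diagonalized by the same Kronecker-structured eigenbasis $V_1 \otimes V_2$. Hence the spectral templates of an $N_1 N_2$-dimensional product graph factor into templates of sizes $N_1$ and $N_2$, which is the standard fact that motivates learning the factors from product spectral templates rather than operating directly on the high-dimensional graph.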