In recent years, various machine learning (ML) solutions have been developed to solve resource management, interference management, autonomy, and decision-making problems in non-wireless and wireless networks. Standard ML approaches require collecting data at a central server for training, which cannot preserve the data privacy of devices. To address this issue, federated learning (FL) is an effective method that allows edge devices to collaboratively train ML models without sharing their local datasets, thereby preserving data privacy. Typically, FL learns a single global model for a given task across all devices and hence cannot adapt the model to devices with different data distributions. In such cases, meta learning can be employed to adapt learning models to different data distributions using only a few data samples. In this tutorial, we conduct a comprehensive review of FL, meta learning, and federated meta learning (FedMeta). Unlike other tutorial papers, our objective is to examine how FL, meta learning, and FedMeta can be designed, optimized, and evolved over non-wireless and wireless networks. Furthermore, we analyze not only the relationships among these learning algorithms but also their advantages and disadvantages in real-world applications.
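To make the relationship among the three paradigms concrete, the following is a minimal, illustrative sketch of a FedMeta-style training loop: clients perform a MAML-like inner adaptation step on a few local samples, and the server aggregates their meta-gradients FedAvg-style to learn a shared initialization. The toy linear-regression setup, the learning rates, and the first-order approximation of the meta-gradient are assumptions made purely for illustration, not the specific algorithms reviewed in this tutorial.

```python
import numpy as np

# Hypothetical toy setup: each client k observes data from y = w_k . x + noise,
# where the client-specific w_k are drawn around a shared mean, so a
# meta-learned initialization can adapt to each client in one gradient step.

rng = np.random.default_rng(0)
DIM, CLIENTS, ROUNDS = 5, 10, 200
INNER_LR, OUTER_LR, SAMPLES = 0.05, 0.1, 20

true_mean = rng.normal(size=DIM)
client_w = [true_mean + 0.3 * rng.normal(size=DIM) for _ in range(CLIENTS)]

def make_batch(w):
    # A few local samples for one client (kept on-device in real FL).
    X = rng.normal(size=(SAMPLES, DIM))
    y = X @ w + 0.01 * rng.normal(size=SAMPLES)
    return X, y

def grad(theta, X, y):
    # Gradient of the mean-squared error of a linear model y_hat = X theta.
    return 2.0 * X.T @ (X @ theta - y) / len(y)

theta = np.zeros(DIM)  # shared meta-initialization held by the server

for rnd in range(ROUNDS):
    meta_grads = []
    for w in client_w:
        # Inner (support) step: client adapts the global initialization
        # with a few local samples, in the spirit of MAML-style meta learning.
        Xs, ys = make_batch(w)
        adapted = theta - INNER_LR * grad(theta, Xs, ys)
        # Outer (query) step: evaluate the adapted model on fresh samples.
        # A first-order approximation treats 'adapted' as constant w.r.t.
        # theta (Reptile/FOMAML-style), avoiding second derivatives.
        Xq, yq = make_batch(w)
        meta_grads.append(grad(adapted, Xq, yq))
    # Server aggregates the client meta-gradients (FedAvg-style averaging)
    # and updates the shared initialization; raw data never leaves clients.
    theta -= OUTER_LR * np.mean(meta_grads, axis=0)

print("distance of meta-init from client mean:", np.linalg.norm(theta - true_mean))
```

In this sketch, plain FL would average ordinary local gradients to fit one global model, whereas the FedMeta loop instead averages meta-gradients so that the learned initialization adapts quickly to each device's own data distribution.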