This paper investigates federated learning in a wireless communication system where random device selection is employed with non-independent and identically distributed (non-IID) data. Our analysis shows that when deep neural networks are trained via federated stochastic gradient descent (FedSGD) on non-IID datasets, device selection introduces gradient errors that accumulate over rounds and can lead to weight divergence. To mitigate this divergence, we design an age-weighted FedSGD that scales each local gradient according to the device's update age, i.e., how long the device has been excluded from training. To further improve learning performance by increasing device participation under a maximum latency constraint, we formulate an energy consumption minimization problem that jointly optimizes resource allocation and sub-channel assignment. By transforming the resource allocation problem into a convex one and applying the Karush-Kuhn-Tucker (KKT) conditions, we derive the optimal resource allocation. Moreover, we develop a matching-based algorithm to obtain an enhanced sub-channel assignment. Simulation results indicate that i) age-weighted FedSGD outperforms conventional FedSGD in both convergence rate and achievable accuracy, and ii) the proposed resource allocation and sub-channel assignment strategies significantly reduce energy consumption and improve learning performance by increasing the number of selected devices.
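To make the age-weighting idea concrete, the following is a minimal sketch of one aggregation round, assuming the weight of each selected device grows with the number of rounds since it last participated (the specific rule `1 + age`, normalized across selected devices, and the function name are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def age_weighted_fedsgd_round(global_weights, local_grads, selected, ages, lr=0.1):
    """One server-side aggregation round of an age-weighted FedSGD variant.

    global_weights : np.ndarray, current global model parameters
    local_grads    : dict mapping device id -> local gradient (np.ndarray)
    selected       : list of device ids chosen this round
    ages           : np.ndarray of ints, rounds since each device last participated
    """
    # Illustrative weighting: devices excluded for longer contribute
    # proportionally more, counteracting accumulated gradient error.
    weights = np.array([1.0 + ages[k] for k in selected])
    weights /= weights.sum()  # normalize into a weighted average

    agg_grad = sum(w * local_grads[k] for w, k in zip(weights, selected))

    # Bookkeeping: reset the age of selected devices, increment the rest.
    ages += 1
    for k in selected:
        ages[k] = 0

    return global_weights - lr * agg_grad, ages

# Example usage with 10 devices and a 5-dimensional model:
rng = np.random.default_rng(0)
w = np.zeros(5)
ages = np.zeros(10, dtype=int)
grads = {k: rng.normal(size=5) for k in range(10)}
w, ages = age_weighted_fedsgd_round(w, grads, selected=[1, 4, 7], ages=ages)
```

Under this rule, a device selected after a long absence receives a larger aggregation weight, which matches the abstract's goal of scaling local gradients by the devices' previous participation state.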