Federated Learning (FL) enables multiple clients to collaboratively train machine learning models without disclosing their raw data. In practice, due to system and statistical heterogeneity among devices, synchronous FL often suffers from the straggler effect. Asynchronous FL can mitigate this problem, making it better suited to scenarios with large numbers of participants. However, Non-IID data and stale models pose significant challenges to asynchronous FL, as they can degrade the utility of the global model and even cause training to fail. In this work, we propose a novel asynchronous FL framework called Federated Historical Learning (FedHist), which effectively addresses the challenges posed by both Non-IID data and gradient staleness. FedHist enhances the stability of local gradients by performing weighted fusion with historical global gradients cached on the server. Relying on hindsight, it assigns aggregation weights to each participant in a multi-dimensional manner during each communication round. To further improve the efficiency and stability of training, we introduce an intelligent $\ell_2$-norm amplification scheme, which dynamically regulates the learning progress based on the $\ell_2$-norms of the submitted gradients. Extensive experiments demonstrate that FedHist outperforms state-of-the-art methods in terms of convergence performance and test accuracy.
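
For intuition only, the toy NumPy sketch below illustrates the two mechanisms sketched above: fusing a (possibly stale) client gradient with historical global gradients cached on the server, and rescaling the result based on its $\ell_2$-norm. The function names, the fusion weight `beta`, and the choice of target norm are illustrative assumptions, not the actual FedHist algorithm.

```python
import numpy as np

def fuse_with_history(local_grad, hist_grads, beta=0.6):
    """Stabilize a client's local gradient by weighted fusion with
    historical global gradients cached on the server (illustrative form)."""
    hist_avg = np.mean(hist_grads, axis=0)  # average of cached global gradients
    return beta * local_grad + (1.0 - beta) * hist_avg

def amplify_by_l2_norm(fused_grad, target_norm):
    """Rescale the fused gradient so its l2-norm matches a target norm,
    illustrating norm-based regulation of the learning progress."""
    norm = np.linalg.norm(fused_grad)
    if norm == 0.0:
        return fused_grad
    return fused_grad * (target_norm / norm)

# Toy usage: one asynchronous update arriving at the server.
rng = np.random.default_rng(0)
hist_grads = [rng.normal(size=10) for _ in range(5)]  # cached historical global gradients
local_grad = rng.normal(size=10)                      # stale gradient from one client

fused = fuse_with_history(local_grad, hist_grads)
target = np.mean([np.linalg.norm(g) for g in hist_grads])  # e.g., recent average norm
update = amplify_by_l2_norm(fused, target)
```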