Federated learning (FL) allows multiple clients to collaboratively learn a globally shared model through cycles of model aggregation and local model training, without sharing their data. In this paper, we comprehensively study a new problem named aggregation error (AE), which arises at the model aggregation stage on the server and is mainly induced by the heterogeneity of client data. Large discrepancies between local models produce a large AE, which generally results in slow convergence and reduced accuracy for FL. To reduce the AE, we propose a novel federated learning framework from a Bayesian perspective, in which a multivariate Gaussian product mechanism is employed to aggregate the local models. Since the product of Gaussians is itself a Gaussian, this property allows us to aggregate the local expectations and covariances directly in a convex form, thereby greatly reducing the AE. Accordingly, on the clients, we develop a new Federated Online Laplace Approximation (FOLA) method, which estimates the parameters of the local posteriors by repeatedly accumulating priors. Specifically, in every round, the global posterior distributed by the server is treated as the prior on each client, so the local posterior can also be effectively approximated by a Gaussian using FOLA. Experimental results on benchmark datasets achieve state-of-the-art performance and clearly demonstrate the advantages of the proposed method.
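To illustrate the Gaussian product property that the aggregation step relies on, the following is a standard identity sketched with generic symbols $\theta$, $\mu_k$, and $\Sigma_k$ (these symbols are assumptions for illustration and are not notation fixed by the abstract; the paper's exact aggregation rule is given in the method section). The product of $K$ Gaussian local posteriors is again Gaussian:
\[
\prod_{k=1}^{K} \mathcal{N}(\theta;\, \mu_k, \Sigma_k) \;\propto\; \mathcal{N}(\theta;\, \mu, \Sigma),
\qquad
\Sigma = \Big(\sum_{k=1}^{K} \Sigma_k^{-1}\Big)^{-1},
\qquad
\mu = \Sigma \sum_{k=1}^{K} \Sigma_k^{-1} \mu_k .
\]
Because the matrix weights $\Sigma\,\Sigma_k^{-1}$ sum to the identity, the global mean is a precision-weighted (convex, in the matrix sense) combination of the local means, which is the convex aggregation form referred to above.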