Abstract: Bayesian neural networks describe the network weights with random variables rather than deterministic values, and they are mostly trained by variational inference, which updates the mean and variance simultaneously. Here, we formulate Bayesian neural networks as a minimax game. We conduct experiments on the MNIST dataset, and the preliminary results are comparable to those of the existing closed-loop transcription neural network. Finally, we reveal the connections between Bayesian neural networks and closed-loop transcription neural networks, show that our framework is practical, and provide another view of Bayesian neural networks.
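As a purely illustrative sketch (the abstract does not state the paper's actual objective), a minimax formulation of variational BNN training might take the following generic form, where the variational posterior $q_{\mu,\sigma}$, the loss $\mathcal{L}$, and the adversary parameters $\phi$ are all assumptions introduced here for illustration:

% Illustrative only: a generic minimax game over a Bayesian neural network
% (an assumed form, not the paper's exact objective).
% The inner player maximizes over adversary parameters \phi, while the outer
% player minimizes over the variational mean \mu and variance \sigma.
\[
  \min_{\mu,\,\sigma} \; \max_{\phi} \;
  \mathbb{E}_{\theta \sim q_{\mu,\sigma}}\!\left[ \mathcal{L}(\theta;\, \phi) \right]
\]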