Abstract: In this paper, we introduce a formulation of Physics-Informed Neural Networks (PINNs) based on learning the form of the Fourier decomposition, together with a training methodology that samples a spread of randomly chosen boundary conditions. Training in this way produces a PINN that generalises: after training, it correctly predicts the solution for an arbitrary set of boundary conditions and interpolates between the samples that spanned the training domain. We demonstrate, for a toy system of two coupled oscillators, that decoupling the solution from specific boundary conditions gives the PINN formulation genuine predictive capability, since it effectively reduces the ratio of training time to evaluation time.
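A minimal sketch of how such a formulation might look, assuming PyTorch: a network maps a sampled boundary condition to the coefficients of a truncated Fourier series whose frequencies are trainable, and the coupled-oscillator ODEs are enforced via autograd. The class name `FourierPINN`, the mode count `K`, the spring constants, and the sampling ranges are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

K = 4  # number of Fourier modes (assumed)

class FourierPINN(nn.Module):
    def __init__(self, k=1.0, kc=0.5):
        super().__init__()
        self.k, self.kc = k, kc  # spring constants (assumed values)
        # trainable mode frequencies, shared across boundary conditions
        self.omega = nn.Parameter(torch.linspace(0.5, 3.0, K))
        # MLP: boundary condition (x1(0), x2(0), v1(0), v2(0))
        # -> per-mode sin/cos amplitudes for each of the two masses
        self.net = nn.Sequential(
            nn.Linear(4, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, 2 * 2 * K),
        )

    def forward(self, t, bc):
        # t: (N, 1), bc: (N, 4) -> positions (x1, x2): (N, 2)
        coeffs = self.net(bc).view(-1, 2, 2, K)  # (N, mass, sin/cos, mode)
        phase = t * self.omega                   # (N, K)
        basis = torch.stack([torch.sin(phase), torch.cos(phase)], dim=1)
        return (coeffs * basis.unsqueeze(1)).sum(dim=(2, 3))

def physics_loss(model, t, bc):
    # residual of x_i'' = -k x_i + kc (x_j - x_i), unit masses assumed
    t = t.requires_grad_(True)
    x = model(t, bc)
    loss = 0.0
    for i in range(2):
        v = torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0]
        a = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
        j = 1 - i
        res = a.squeeze(-1) + model.k * x[:, i] - model.kc * (x[:, j] - x[:, i])
        loss = loss + res.pow(2).mean()
    return loss

model = FourierPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5000):
    bc = torch.rand(128, 4) * 2 - 1   # random boundary conditions per batch
    t = torch.rand(128, 1) * 10.0     # random collocation times
    x0 = model(torch.zeros(128, 1), bc)
    # enforce the initial positions; physics residual does the rest
    loss = physics_loss(model, t, bc) + (x0 - bc[:, :2]).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the boundary condition is an input rather than a fixed constraint, a single trained network can be evaluated for any boundary condition in the sampled range, which is the decoupling the abstract describes. For brevity the sketch constrains only the initial positions; the initial velocities would be enforced the same way via a first derivative at t = 0.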
Abstract: Variational autoencoders (VAEs) are a popular approach to generative modelling, but exploiting their capabilities in practice can be difficult. Recent work on regularised and entropic autoencoders has begun to explore the potential, for generative modelling, of removing the variational approach and returning to the classic deterministic autoencoder (DAE) with additional novel regularisation methods. In this paper we empirically explore the capability of DAEs for image generation without such additional methods, and the effect of the implicit regularisation and smoothness of large networks. We find that DAEs can be used successfully for image generation without additional loss terms, and that many of the useful properties of VAEs arise implicitly from sufficiently large convolutional encoders and decoders when trained on CIFAR-10 and CelebA.
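A minimal sketch of the baseline the abstract describes, assuming PyTorch and torchvision: a plain deterministic convolutional autoencoder trained on CIFAR-10 with a reconstruction loss only, no KL term or added regulariser. The `ConvDAE` name, layer sizes, and latent dimension are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class ConvDAE(nn.Module):
    def __init__(self, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),     # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),   # 16x16 -> 8x8
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(),  # 8x8 -> 4x4
            nn.Flatten(),
            nn.Linear(256 * 4 * 4, latent),            # deterministic code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 256 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

data = datasets.CIFAR10("data", train=True, download=True,
                        transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=128, shuffle=True)
model = ConvDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    for x, _ in loader:
        recon = model(x)
        loss = nn.functional.mse_loss(recon, x)  # reconstruction loss only
        opt.zero_grad()
        loss.backward()
        opt.step()
```

To generate images from such a model one still needs a way to sample latent codes; a common ex-post choice is to fit a simple density, such as a Gaussian, to the training-set latents and decode draws from it, though the paper's exact sampling procedure may differ.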