We present a variational inference (VI) framework that unifies sequential Monte Carlo (particle filtering) with \emph{approximate} rejection sampling to construct a flexible family of variational distributions. We further augment this approach with a resampling step via a Bernoulli race, a generalization of the Bernoulli factory, to obtain a low-variance estimator of the marginal likelihood. Our framework, Variational Rejection Particle Filtering (VRPF), yields novel variational bounds on the marginal likelihood that can be optimized efficiently with respect to the variational parameters and that generalize several existing approaches in the VI literature. We also establish theoretical properties of the variational bound and present experiments on several models of sequential data, including the Gaussian state-space model and the variational recurrent neural network (VRNN), on which VRPF outperforms existing state-of-the-art VI methods.
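For context, a minimal sketch of the generic SMC-based evidence bound that such frameworks build on (e.g., the filtering variational objective of FIVO); this is not the exact VRPF objective, whose rejection and Bernoulli-race resampling steps modify the weights. Given $N$ particles with unnormalized importance weights $w_t^i$ at each time step $t = 1, \dots, T$, the particle filter's unbiased marginal-likelihood estimator and the lower bound it induces are
\begin{equation}
\hat{p}_N(x_{1:T}) \;=\; \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} w_t^i,
\qquad
\log p(x_{1:T}) \;\ge\; \mathbb{E}\!\left[\log \hat{p}_N(x_{1:T})\right],
\end{equation}
where the expectation is taken over the randomness of the sampler and the bound follows from Jensen's inequality applied to the unbiased estimator $\hat{p}_N$.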