Mirror-prox (MP) is a well-known algorithm for solving variational inequality (VI) problems. VIs with a monotone operator cover a large class of settings, such as convex minimization and min-max or saddle point problems. To obtain a convergent algorithm, the step-size of the classic MP algorithm relies heavily on problem-dependent knowledge of the operator, such as its smoothness parameter, which is hard to estimate. Recently, a universal variant of MP for smooth/bounded operators has been introduced whose step-size depends only on the norm of the MP updates. In this work, we relax this dependence from the norm of the updates to the Bregman divergence between updates. This relaxation allows us to extend the analysis of universal MP to settings where the operator is not smooth or bounded. Furthermore, we analyse the VI problem with a stochastic monotone operator in different settings and obtain rates that are optimal up to a logarithmic factor.
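To make the relaxation concrete, the following is a minimal sketch of the two step-size schedules, assuming an AdaGrad-style universal schedule; the constants $D$ and $G$, the iterate pairs $(z_t, w_t)$ produced by the two MP steps, and the mirror map $\psi$ are illustrative assumptions, not taken from this abstract. A norm-based universal schedule would set
\[
\eta_t \;=\; \frac{D}{\sqrt{G^2 + \sum_{s=1}^{t-1} \| z_s - w_s \|^2}},
\]
whereas the relaxation sketched here replaces the squared norm by the Bregman divergence of the mirror map,
\[
\eta_t \;=\; \frac{D}{\sqrt{G^2 + \sum_{s=1}^{t-1} D_{\psi}(z_s, w_s)}},
\qquad
D_{\psi}(z, w) \;=\; \psi(z) - \psi(w) - \langle \nabla \psi(w),\, z - w \rangle.
\]
For a mirror map that is 1-strongly convex with respect to the norm, $D_{\psi}(z, w) \ge \tfrac{1}{2}\|z - w\|^2$, so the Bregman quantity dominates the squared distance up to a constant factor; intuitively, it measures the movement of the iterates in the geometry induced by $\psi$ itself rather than in a fixed norm.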