We review recent developments in machine learning algorithms pertinent to the inverse renormalization group, which was originally established as a generative numerical method by Ron, Swendsen, and Brandt through the implementation of compatible Monte Carlo simulations. Inverse renormalization group methods enable the iterative generation of configurations for increasing lattice sizes without the critical slowing-down effect. We discuss the construction of inverse renormalization group transformations with the use of convolutional neural networks and present applications to models of statistical mechanics, lattice field theory, and disordered systems. We highlight the case of the three-dimensional Edwards-Anderson spin glass, where the inverse renormalization group can be employed to construct configurations for lattice volumes that have not yet been accessed by dedicated supercomputers.