Physics-informed neural networks (PINNs) are increasingly employed to replace or augment traditional numerical methods for solving partial differential equations (PDEs). Despite their many attractive features, state-of-the-art PINNs act as surrogates for a specific realization of a PDE system and are hence problem-specific: whenever the boundary conditions or domain shape change, the model must be re-trained. This limitation prohibits the application of PINNs to realistic or large-scale engineering problems, especially since the cost and effort of training them are considerable. This paper introduces a transferable framework for solving boundary value problems (BVPs) via deep neural networks that can be trained once and reused indefinitely across domains of unseen sizes, shapes, and boundary conditions. First, we introduce the \emph{genomic flow network} (GFNet), a neural network that can infer the solution of a BVP for arbitrary boundary conditions on a small square domain called a \emph{genome}. Then, we propose the \emph{mosaic flow} (MF) predictor, a novel iterative algorithm that assembles, or stitches, the GFNet's inferences to obtain the solution of BVPs on unseen, larger domains while preserving the spatial regularity of the solution. We demonstrate that our framework can estimate the solution of the Laplace and Navier-Stokes equations in domains of unseen shapes and boundary conditions that are, respectively, $1200$ and $12$ times larger than the training domains. Since our framework eliminates the need for re-training, it achieves speedups of up to three orders of magnitude over the state of the art.
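The stitching idea described above can be illustrated with a toy stand-in. Since no trained GFNet is at hand here, a few Jacobi relaxation sweeps on each genome-sized patch play the role of the network's boundary-to-solution inference for the Laplace equation; the overlapping-window sweep then propagates boundary information across a larger domain. The function names (`genome_solve`, `mosaic_flow`) and all parameters are illustrative assumptions, not the paper's actual algorithm or API.

```python
import numpy as np

def genome_solve(patch, sweeps=50):
    """Stand-in for a GFNet inference: relax the interior of a small
    square patch toward the harmonic solution implied by its (fixed)
    patch boundary, via vectorized Jacobi iteration."""
    p = patch.copy()
    for _ in range(sweeps):
        p[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                                + p[1:-1, :-2] + p[1:-1, 2:])
    return p

def mosaic_flow(u, genome=8, iters=50):
    """Iteratively stitch genome-sized inferences over a larger domain:
    sweep overlapping windows (half-genome stride) so information
    propagates; the outermost ring of `u` is the fixed BVP boundary."""
    n = u.shape[0]
    stride = genome // 2
    # Window starts, with the last window aligned to the far edge so
    # every interior point lies in some window's interior.
    starts = list(range(0, n - genome, stride)) + [n - genome]
    for _ in range(iters):
        for i in starts:
            for j in starts:
                u[i:i + genome, j:j + genome] = genome_solve(
                    u[i:i + genome, j:j + genome])
    return u

# Laplace BVP on a 17x17 grid: u = 1 on the top edge, 0 elsewhere.
n = 17
u = np.zeros((n, n))
u[0, :] = 1.0
u = mosaic_flow(u, genome=8, iters=50)
```

By symmetry and superposition, the exact solution of this model problem equals $0.25$ at the center of the square, which the stitched iteration recovers without ever solving on the full domain at once.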