Deep Probabilistic Programming (DPP) allows powerful models based on recursive computation to be learned using efficient deep-learning optimization techniques. Additionally, DPP offers a unified perspective in which inference and learning algorithms are treated on a par with models, as stochastic programs. Here, we offer a framework for representing and learning flexible PAC-Bayes bounds as stochastic programs using DPP-based methods. In particular, we show that DPP techniques can be leveraged to derive generalization bounds that draw on the compositionality of DPP representations. In turn, the bounds we introduce offer principled training objectives for higher-order probabilistic programs. We give a definition of a higher-order generalization bound, which naturally encompasses single- and multi-task generalization perspectives (including transfer and meta-learning), as well as a novel class of bounds based on a learned measure of model complexity. Further, we show how modified forms of all higher-order bounds can be efficiently optimized as objectives for DPP training, using variational techniques. We evaluate our framework in single- and multi-task generalization settings on synthetic and biological data, showing improved performance and generalization prediction using flexible DPP model representations and learned complexity measures.
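As background (standard material, not specific to this work), the classical single-task PAC-Bayes bound that learned bounds of this kind build on can be stated in the McAllester/Maurer form: with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, for any fixed prior $P$ and every posterior $Q$ over hypotheses,

```latex
% Classical single-task PAC-Bayes bound (McAllester/Maurer form).
% L(h): expected loss of hypothesis h; \hat{L}_S(h): empirical loss on S;
% Q: learned posterior; P: data-independent prior; \delta: confidence level.
\mathbb{E}_{h \sim Q}\!\left[ L(h) \right]
  \;\le\;
\mathbb{E}_{h \sim Q}\!\left[ \hat{L}_S(h) \right]
  \;+\;
\sqrt{ \frac{ \mathrm{KL}(Q \,\|\, P) \;+\; \ln\!\frac{2\sqrt{n}}{\delta} }{ 2n } }.
```

Here the $\mathrm{KL}(Q\,\|\,P)$ term plays the role of the fixed complexity measure that the abstract's higher-order bounds replace with a learned one.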