We study conditional risk minimization (CRM), i.e., the problem of learning a hypothesis of minimal risk for prediction at the next step of sequentially arriving dependent data. Although CRM is a fundamental problem, successful learning in the CRM sense has so far only been demonstrated by theoretical algorithms that cannot be used for real problems, as they would require storing all incoming data. In this work, we introduce MACRO, a meta-algorithm for CRM that does not suffer from this shortcoming yet still offers learning guarantees. Instead of storing all data, it maintains and iteratively updates a set of learning subroutines. With suitable approximations, MACRO can be applied to real data, yielding improved prediction performance compared to traditional non-conditional learning.
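To make the idea of a subroutine-maintaining meta-algorithm concrete, here is a minimal, hypothetical sketch: rather than storing the raw stream, it keeps a small pool of online learning subroutines started at different times and weights their predictions with an exponential-weights rule. The names (`OnlineMean`, `MacroSketch`), the pool-growth schedule, the pruning rule, and the loss are all illustrative assumptions and do not reproduce the paper's actual construction or its guarantees.

```python
import math
import random


class OnlineMean:
    """Toy subroutine: predicts the running mean of the data it has seen."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def predict(self):
        return self.mean

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n


class MacroSketch:
    """Maintain a bounded pool of subroutines started at different times
    and combine them with exponential weights (constants are placeholders)."""

    def __init__(self, eta=0.5, max_pool=8):
        self.eta = eta
        self.max_pool = max_pool
        self.pool = []    # list of (learner, weight) pairs
        self.t = 0

    def predict(self):
        if not self.pool:
            return 0.0
        total = sum(w for _, w in self.pool)
        return sum(w * l.predict() for l, w in self.pool) / total

    def update(self, x):
        self.t += 1
        # Down-weight each subroutine by its squared loss on the new point,
        # then let every subroutine see the point.
        self.pool = [(l, w * math.exp(-self.eta * (l.predict() - x) ** 2))
                     for l, w in self.pool]
        for l, _ in self.pool:
            l.update(x)
        # Occasionally start a fresh subroutine so recent regimes get a
        # dedicated learner; drop the lowest-weight one if the pool is full.
        if (self.t & (self.t - 1)) == 0:  # at t = 1, 2, 4, 8, ...
            self.pool.append((OnlineMean(), 1.0))
        if len(self.pool) > self.max_pool:
            self.pool.remove(min(self.pool, key=lambda lw: lw[1]))


if __name__ == "__main__":
    random.seed(0)
    model = MacroSketch()
    # A stream whose conditional distribution shifts halfway through.
    stream = [random.gauss(0.0, 1.0) for _ in range(200)]
    stream += [random.gauss(3.0, 1.0) for _ in range(200)]
    for x in stream:
        y_hat = model.predict()  # predict before seeing the next point
        model.update(x)
    print(f"final prediction {model.predict():.2f} (recent regime mean is 3)")
```

In this sketch, memory is bounded by `max_pool` regardless of the stream length, which is the shortcoming the abstract contrasts with: subroutines started after a distribution shift see only recent data and quickly dominate the weighted prediction.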