Real-world planning problems are often too complex to be effectively tackled by a single unaided human. To alleviate this, recent work has focused on developing collaborative planning systems that assist humans in complex domains, where bridging the gap between the system's problem representation and the real world is a key consideration. Transferring the speed and correctness that formal planners provide to real-world planning problems is greatly complicated by the dynamic and online nature of such tasks. Formal specifications of task and environment dynamics frequently lack constraints on behaviors or goal conditions relevant to how a human operator prefers a plan to be carried out. While adding constraints to the representation to increase its realism risks slowing down the planner, we posit that the same benefits can be realized without sacrificing speed by modeling this problem as an online preference learning task. As part of a broader cooperative planning system, we present a feedback-driven plan critic. This method combines reinforcement learning from human feedback with a genetic algorithm to directly optimize a plan with respect to natural-language user preferences despite the non-differentiability of traditional planners. Directly optimizing the plan bridges the gap between research into more efficient planners and research into planning with language models by using the convenience of natural language to guide the output of formal planners. We demonstrate the effectiveness of our plan critic at adhering to user preferences on a disaster recovery task, and observe improved performance compared to an LLM-only neurosymbolic approach.
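
As a rough illustration of the gradient-free optimization loop described above, the following Python sketch evolves candidate plans toward higher scores under a preference-based reward function. The plan encoding as a sequence of action names, the `reward` callable, and all hyperparameters are illustrative assumptions rather than details of our system; in the full method the reward would be learned from human feedback on natural-language preferences, and candidates would still be checked against the formal planner for validity.

```python
import random
from typing import Callable, List, Sequence

# Simplified plan encoding: a fixed-length sequence of action names (assumption).
Plan = List[str]


def mutate(plan: Plan, action_pool: Sequence[str], rate: float = 0.1) -> Plan:
    """Randomly swap individual actions for alternatives from the pool."""
    return [random.choice(action_pool) if random.random() < rate else a for a in plan]


def crossover(a: Plan, b: Plan) -> Plan:
    """Single-point crossover between two parent plans."""
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]


def optimize_plan(
    seed_plan: Plan,
    reward: Callable[[Plan], float],  # stand-in for a preference-trained reward model
    action_pool: Sequence[str],
    pop_size: int = 32,
    generations: int = 50,
) -> Plan:
    """Evolve plan variants toward higher preference reward; no gradients required."""
    population = [mutate(seed_plan, action_pool) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=reward, reverse=True)
        elites = scored[: pop_size // 4]  # keep the top quarter as parents
        children = [
            mutate(crossover(*random.sample(elites, 2)), action_pool)
            for _ in range(pop_size - len(elites))
        ]
        population = elites + children
    return max(population, key=reward)
```

Because the loop only queries `reward` on complete candidate plans, it treats both the planner and the preference model as black boxes, which is what allows natural-language feedback to steer a non-differentiable formal planner.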