Overoptimization failures in machine learning and AI can involve specification gaming, reward hacking, fragility to distributional shift, and Goodhart's or Campbell's law. These failure modes are an important challenge in building safe AI systems, but multi-agent systems have additional, related failure modes that are more complex, more problematic, and less well understood. This paper explains why this is the case, then lays out several classes of such failures: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking.