Lethal Autonomous Weapons promise to revolutionize warfare -- and they raise a multitude of ethical and legal questions. It has thus been suggested that values and principles of conduct (such as the Geneva Conventions) be programmed into these machines' control software, thereby rendering them both physically and morally superior to human combatants. We employ mathematical logic and theoretical computer science to explore fundamental limitations on the moral behaviour of intelligent machines in a series of "Gedankenexperiments": refining and sharpening variants of the Trolley Problem leads us to construct an (admittedly artificial but) fully deterministic situation in which a robot is presented with two choices, one clearly morally preferable to the other -- yet, owing to the undecidability of the Halting Problem, the robot provably cannot decide algorithmically which one it is. Our considerations have surprising implications for the question of responsibility and liability for an autonomous system's actions and lead to specific technical recommendations.
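
To make the structure of the argument concrete, the following is a minimal, purely illustrative sketch (not taken from the paper) of how such a reduction might look: if a robot could always compute the morally preferable choice in a scenario whose outcome is tied to whether an embedded computation halts, it could thereby decide the Halting Problem. All names (make_scenario, morally_preferable_choice, halts) are hypothetical.

```python
def make_scenario(program_source: str, program_input: str) -> dict:
    """Hypothetical construction: a deterministic trolley-style scenario in
    which choosing track "A" is morally preferable exactly when the given
    program halts on the given input (e.g. a mechanism on track A is
    triggered only if that computation terminates)."""
    return {"program": program_source, "input": program_input}


def morally_preferable_choice(scenario: dict) -> str:
    """Placeholder for the robot's assumed moral decision procedure,
    returning "A" or "B". The argument shows no algorithm can implement
    this correctly for all scenarios of the above form."""
    raise NotImplementedError


def halts(program_source: str, program_input: str) -> bool:
    """If morally_preferable_choice were computable, this function would
    decide the Halting Problem -- contradicting Turing's theorem."""
    scenario = make_scenario(program_source, program_input)
    # The robot's morally correct answer reveals whether the embedded
    # computation halts, which no algorithm can determine in general.
    return morally_preferable_choice(scenario) == "A"
```

The sketch only mirrors the shape of the reduction stated in the abstract; the paper's actual construction of the deterministic scenario is what makes the link between moral preferability and halting precise.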