Previous research has shown that the fairness and legitimacy of a moral decision-maker are important for people's acceptance of and compliance with that decision-maker. As technology rapidly advances, there have been increasing hopes and concerns about building artificially intelligent entities designed to intervene against norm violations. However, it is unclear how people would perceive artificial moral regulators that impose punishment on human wrongdoers. Grounded in theories of psychology and law, we predict that the perceived fairness of punishment imposed by a robot would increase the legitimacy of the robot as a moral regulator, which would, in turn, increase people's willingness to accept and comply with the robot's decisions. We close with a conceptual framework for building a robot moral regulator that can successfully regulate norm violations.