Automated software agents --- or bots --- have long been an important part of how Wikipedia's volunteer community of editors writes, edits, updates, monitors, and moderates content. In this paper, I discuss the complex social and technical environment in which Wikipedia's bots operate. This paper focuses on the establishment and role of English Wikipedia's bot policies and the Bot Approvals Group, a volunteer committee that reviews applications for new bots and helps resolve conflicts between Wikipedians about automation. In particular, I examine an early controversy over the first bot in Wikipedia to automatically enforce a social norm about how Wikipedian editors ought to interact in discussion spaces. As I show, bots enforce many rules in Wikipedia, but humans produce these bots and negotiate the rules around their operation. Because of the openness of Wikipedia's processes around automation, we can vividly observe the often-invisible human work involved in such algorithmic systems --- in stark contrast to most other user-generated content platforms.