Crowdsourcing has proven to be an effective and inexpensive way to carry out classification tasks and has therefore attracted considerable attention from the research community. However, we still lack a theoretical understanding of how to collect labels from the crowd in an optimal way. In this paper we focus on the problem of worker allocation and compare two active learning policies proposed in the empirical literature with a uniform allocation of the available budget. To this end, we carry out a thorough mathematical analysis of the problem and derive a new bound on the performance of the system. Furthermore, we run extensive simulations in a more realistic scenario and show that our theoretical results also hold in practice.
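To make the comparison concrete, the following is a minimal, illustrative sketch (not taken from the paper) of the two allocation styles in a toy setting: a uniform policy that spreads the labeling budget evenly over tasks, versus a generic uncertainty-based active policy that keeps querying the task whose current vote margin is smallest. The worker accuracy, number of tasks, budget, and the specific greedy rule are all hypothetical stand-ins, not the policies analyzed in the paper.

```python
# Toy comparison of uniform vs. uncertainty-based label allocation.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_tasks = 200          # binary classification tasks (hypothetical)
worker_accuracy = 0.7  # i.i.d. workers, each correct with this probability
budget = 5 * n_tasks   # total number of label requests

true_labels = rng.integers(0, 2, size=n_tasks)

def query(task):
    """Return one noisy crowd label for a task."""
    correct = rng.random() < worker_accuracy
    return true_labels[task] if correct else 1 - true_labels[task]

def majority(votes):
    """Majority vote; break the empty/tied case at random."""
    if not votes or 2 * sum(votes) == len(votes):
        return int(rng.integers(0, 2))
    return int(2 * sum(votes) > len(votes))

def uniform_policy():
    """Spread the budget evenly: each task gets budget // n_tasks labels."""
    votes = [[query(t) for _ in range(budget // n_tasks)] for t in range(n_tasks)]
    return [majority(v) for v in votes]

def uncertainty_policy():
    """Greedy active policy: always label the task whose current
    vote margin |#ones - #zeros| is smallest."""
    votes = [[] for _ in range(n_tasks)]
    for _ in range(budget):
        margins = [abs(2 * sum(v) - len(v)) for v in votes]
        t = int(np.argmin(margins))
        votes[t].append(query(t))
    return [majority(v) for v in votes]

for name, policy in [("uniform", uniform_policy), ("uncertainty", uncertainty_policy)]:
    preds = np.array(policy())
    print(f"{name:12s} accuracy: {np.mean(preds == true_labels):.3f}")
```

In this simplified setting the adaptive policy concentrates labels on tasks whose aggregated vote is still ambiguous, which is the intuition behind the active learning policies compared in the paper; the paper's analysis and simulations quantify when such adaptivity actually outperforms the uniform baseline.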