Collective action in machine learning is the study of the influence that a coordinated group can exert over machine learning algorithms. While previous research has concentrated on assessing the impact of collectives against Bayes-optimal classifiers, this perspective is limited: in practice, classifiers seldom achieve Bayes optimality, and their behavior depends on the choice of learning algorithm and its inherent inductive biases. In this work, we initiate the study of how the choice of learning algorithm affects the success of a collective in practical settings. Specifically, we focus on distributionally robust optimization (DRO), popular for improving worst-group error, and on the ubiquitous stochastic gradient descent (SGD), due to its inductive bias toward "simpler" functions. Our empirical results, supported by a theoretical foundation, show that the effective size and success of the collective are highly dependent on properties of the learning algorithm. This highlights the necessity of taking the learning algorithm into account when studying the impact of collective action in machine learning.
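
For concreteness, one standard formalization of this setting (following prior work on algorithmic collective action; the notation below is illustrative and not taken from this paper): a collective controlling an $\alpha$-fraction of the training data plants a signal by applying a feature transformation $g$ and relabeling to a target class $y^*$, and its success against the deployed classifier $f$ is the probability that the signal triggers the target prediction,
\[
S(\alpha) \;=\; \Pr_{x \sim \mathcal{D}}\bigl[\, f(g(x)) = y^* \,\bigr],
\]
where $\mathcal{D}$ is the base data distribution. Prior analyses evaluate $S(\alpha)$ assuming $f$ is Bayes-optimal on the resulting mixture distribution; the abstract's claim is that when $f$ is instead obtained by a practical learner such as SGD or DRO, $S(\alpha)$ at a given $\alpha$ can change substantially with the algorithm's properties.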