Abstract: Quantification is the research field that studies the task of counting how many data points belong to each class in an unlabeled sample. Traditionally, researchers in this field assume the availability of training data containing labeled observations for all classes to induce quantification models. However, although quantification methods usually estimate counts for every class, we are often interested in the counts for a single target class only. In this context, we have proposed a novel setting, known as One-class Quantification (OCQ), in which reliable training data is available only for the target class. Meanwhile, Positive and Unlabeled Learning (PUL), another branch of Machine Learning, has offered solutions that can be applied to OCQ, even though quantification is not the focal point of PUL. In this article, we close the gap between PUL and OCQ and bring both areas together under a unified view. We compare our methods, Passive Aggressive Threshold (PAT) and One Distribution Inside (ODIn), against PUL methods and show that PAT is generally the fastest and most accurate algorithm. Unlike the PUL methods, PAT and ODIn can also induce quantification models that can be reapplied to quantify different samples of data. Additionally, we introduce Exhaustive TIcE (ExTIcE), an improved version of the PUL algorithm Tree Induction for c Estimation (TIcE), and show that it quantifies more accurately than PAT and the other assessed algorithms in scenarios where a considerable number of negative observations are identical to positive ones.