State-of-the-art machine learning models require significant amounts of annotated data to achieve the desired level of performance. While unlabelled data is often abundant, the annotation process can be expensive and limiting. Under the assumption that some samples are more important for a given task than others, active learning targets the problem of identifying the most informative samples for which annotations should be acquired. Instead of the conventional reliance on model uncertainty as a proxy for the value of unknown labels, in this work we propose a simple sample selection criterion that moves beyond uncertainty. By first accepting the model prediction and then judging its effect on the generalization error, we can better identify wrongly predicted samples. We further present an efficient approximation to our criterion that admits a similarity-based interpretation. In addition to evaluating our method on standard active learning benchmarks, we consider the challenging yet realistic scenario of imbalanced data, where categories are not equally represented. We show state-of-the-art results and better rates of identifying wrongly predicted samples. Our method is simple, model-agnostic, and relies on the current model state without the need for re-training from scratch.
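The following is a minimal, illustrative sketch of the selection idea described above, not the paper's exact criterion or its efficient approximation. It assumes that "accepting the model prediction and judging its effect on the generalization error" can be approximated by pseudo-labelling a candidate, taking one cheap gradient step on that pseudo-label, and measuring the change in held-out validation loss; all function and variable names (validation_loss, score_candidate, val_loader) are hypothetical.

```python
# Hypothetical sketch: accept the model's prediction for an unlabeled candidate,
# "commit" to it with one gradient step, and score the candidate by the change
# in validation loss, used here as a proxy for the generalization error.
import copy
import torch
import torch.nn.functional as F


def validation_loss(model, val_loader, device="cpu"):
    """Average cross-entropy on a held-out set (generalization-error proxy)."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            total += F.cross_entropy(model(x), y, reduction="sum").item()
            n += y.numel()
    return total / max(n, 1)


def score_candidate(model, x_unlabeled, val_loader, lr=1e-3, device="cpu"):
    """Pseudo-label one unlabeled sample with the current prediction, take a
    single gradient step on that pseudo-label, and return the resulting change
    in validation loss. A large increase suggests the prediction is likely
    wrong, making the sample a strong candidate for annotation."""
    base = validation_loss(model, val_loader, device)

    probe = copy.deepcopy(model)           # leave the current model untouched
    probe.train()
    x = x_unlabeled.unsqueeze(0).to(device)
    with torch.no_grad():
        pseudo_label = probe(x).argmax(dim=1)  # accept the model's own prediction

    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    opt.zero_grad()
    F.cross_entropy(probe(x), pseudo_label).backward()
    opt.step()                              # one cheap update on the pseudo-label

    return validation_loss(probe, val_loader, device) - base
```

In use, one would rank all unlabeled samples by this score and send the top-ranked ones to annotators; the paper's approximation avoids the per-sample model copy and update, which is what makes the criterion efficient in practice.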