This paper introduces LEMR, a framework that reduces annotation costs for model selection tasks. Our approach leverages ensemble methods to generate pseudo-labels, employs uncertainty sampling to acquire labeling targets, and uses a Z-score mechanism for iterative committee re-election to refine the model ranking. We present a systematic study across various selection metrics, demonstrating that LEMR achieves results comparable to those obtained with fully labeled datasets while using only a fraction of the labeling budget. Our findings indicate that LEMR not only economizes the labeling effort in weak supervision and semi-supervised learning settings but also effectively guides prompt selection for large language models. Through extensive experiments across 23 tasks, we show that our framework can dramatically decrease the labeling cost without compromising the accuracy of model selection, thereby offering a cost-effective alternative to traditional practices.
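To make the overall pipeline concrete, the following is a minimal NumPy sketch of one round of such a procedure: a committee of candidate models is re-elected via z-scored accuracy on the labeled pool, the committee's ensemble produces pseudo-labels, and the most uncertain unlabeled examples are selected for annotation. The soft-vote ensembling, entropy-based acquisition, and the specific z-score cutoff here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lemr_round(model_probs, labeled_idx, true_labels, z_thresh=1.0, budget=10):
    """One illustrative round of label-efficient model ranking.

    model_probs : (n_models, n_examples, n_classes) predicted class probabilities
    labeled_idx : indices of examples labeled so far (assumed non-empty)
    true_labels : (n_examples,) array; only entries in labeled_idx are consulted
    """
    preds = model_probs.argmax(axis=2)  # (n_models, n_examples) hard predictions

    # 1. Committee re-election: keep models whose z-scored accuracy on the
    #    labeled pool is not far below the mean (z_thresh is an assumption).
    acc = (preds[:, labeled_idx] == true_labels[labeled_idx]).mean(axis=1)
    z = (acc - acc.mean()) / (acc.std() + 1e-8)
    committee = np.where(z > -z_thresh)[0]

    # 2. Pseudo-labels from the committee ensemble (soft-vote average).
    ens_probs = model_probs[committee].mean(axis=0)  # (n_examples, n_classes)
    pseudo_labels = ens_probs.argmax(axis=1)

    # 3. Uncertainty sampling: query labels for the most uncertain unlabeled points.
    entropy = -(ens_probs * np.log(ens_probs + 1e-12)).sum(axis=1)
    entropy[labeled_idx] = -np.inf  # never re-query already labeled points
    query_idx = np.argsort(entropy)[-budget:]

    # 4. Rank all candidates against pseudo-labels, using ground truth where available.
    targets = pseudo_labels.copy()
    targets[labeled_idx] = true_labels[labeled_idx]
    scores = (preds == targets).mean(axis=1)
    ranking = np.argsort(-scores)  # best model first
    return ranking, query_idx, committee
```

In an iterative setting, the returned `query_idx` would be sent for annotation, appended to `labeled_idx`, and the round repeated until the labeling budget is exhausted.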