Federated learning is an emerging machine learning paradigm in which devices collaboratively train a model without exchanging their local data. In each training round, the participating clients are a random subset drawn from the pool of available clients. This procedure, known as client selection, is an important area in federated learning because it strongly affects the convergence rate, learning efficiency, and generalization. In this work, we introduce client filtering in federated learning (FilFL), a new approach for optimizing client selection and training. FilFL first filters the active clients by choosing a subset of them that maximizes a specific objective function; a client selection method is then applied to that subset. We provide a thorough analysis of FilFL's convergence in a heterogeneous setting. Empirical results demonstrate several benefits of our approach, including improved learning efficiency, accelerated convergence ($2$-$3\times$ faster), and higher test accuracy (around $2$-$10$ percentage points higher).
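
To make the two-stage procedure concrete, below is a minimal Python sketch of a single filter-then-select round. The greedy filtering routine, the dummy objective, and all names (`filter_clients`, `score_subset`, `select_clients`) are illustrative assumptions, not the paper's actual implementation; any subset-scoring objective and any client selection method could be substituted.

```python
import random


def score_subset(subset, eval_fn):
    """Objective value of a candidate subset (assumed: higher is better).
    `eval_fn` stands in for the filtering objective, which is an assumption here."""
    return eval_fn(subset)


def filter_clients(active_clients, eval_fn, max_size):
    """Stage one: greedily grow a filtered subset while the objective improves.
    This is one plausible way to 'maximize a specific objective function';
    the paper's actual optimizer may differ."""
    filtered, best = [], float("-inf")
    candidates = list(active_clients)
    while candidates and len(filtered) < max_size:
        # Pick the candidate whose addition yields the highest objective value.
        gains = [(score_subset(filtered + [c], eval_fn), c) for c in candidates]
        score, client = max(gains, key=lambda t: t[0])
        if score <= best:
            break  # no candidate improves the objective; stop filtering
        filtered.append(client)
        candidates.remove(client)
        best = score
    return filtered


def select_clients(filtered, k):
    """Stage two: apply a client selection method to the filtered subset.
    Plain uniform random sampling is used purely for illustration."""
    return random.sample(filtered, min(k, len(filtered)))


# Hypothetical round: filter 100 active clients with a dummy objective
# (sum of client ids, purely illustrative), then sample 10 for training.
active = list(range(100))
chosen = select_clients(filter_clients(active, sum, max_size=30), k=10)
```

The key design point the sketch captures is the decoupling: filtering restricts the candidate pool once per round, after which any existing selection scheme runs unchanged on the smaller set.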