Abstract: Music source separation is the task of extracting all the individual instruments from a given song. Recent breakthroughs on this challenge have gravitated around a single dataset, MUSDB, which is limited to only four instrument classes. New datasets are required to extend separation to other instruments and to improve performance. However, larger datasets are costly and time-consuming, both in terms of collecting data and of training deep networks. In this work, we propose a fast method for evaluating the separability of instruments in any dataset or song, for any instrument, without the need to train and tune a deep neural network. This separability measure helps select appropriate samples for the efficient training of neural networks. Our approach, based on the oracle principle with an ideal ratio mask, is a good proxy for the separation performance of state-of-the-art deep learning approaches based on time-frequency masking, such as TasNet or Open-Unmix. The proposed fast accuracy estimation method can significantly speed up the development process of music source separation systems.
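As a rough illustration of the oracle principle referred to in the abstract, the sketch below (not taken from the paper; function names, STFT parameters, and the plain SDR score are illustrative assumptions) builds an ideal ratio mask from a known target within a mixture, applies it to the mixture spectrogram, and scores the reconstruction as a simple separability estimate.

```python
import numpy as np
from scipy.signal import stft, istft

def irm_oracle_estimate(target, mixture, n_fft=2048, hop=512):
    """Oracle separation of a known `target` from `mixture` via an ideal ratio mask.

    Both inputs are mono waveforms of equal length. The mask is computed from the
    magnitude spectrograms of the target and of the residual (mixture minus target),
    then applied to the mixture STFT; no model is trained. (Illustrative sketch.)
    """
    _, _, T = stft(target, nperseg=n_fft, noverlap=n_fft - hop)
    _, _, M = stft(mixture, nperseg=n_fft, noverlap=n_fft - hop)
    residual = M - T                      # everything in the mix that is not the target
    eps = 1e-8
    irm = np.abs(T) / (np.abs(T) + np.abs(residual) + eps)
    _, estimate = istft(irm * M, nperseg=n_fft, noverlap=n_fft - hop)
    return estimate[: len(target)]

def sdr_db(reference, estimate):
    """Plain signal-to-distortion ratio in dB, used here as a separability score."""
    noise = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(noise ** 2) + 1e-8))
```

Because the oracle only requires one STFT pass per track, such a score can be computed over an entire dataset in minutes, which is what makes it usable as a fast proxy for the masking-based systems cited above.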