Abstract: The task of Raga classification in Indian Art Music (IAM) is constrained by the limited availability of labeled datasets, leaving many Ragas unrepresented during the training of machine learning models. Traditional Raga classification methods rely on supervised learning and assume that any test audio belongs to a Raga represented in the training data, which limits their effectiveness in real-world scenarios where novel, unseen Ragas may appear. To address this limitation, we propose a method based on Novel Class Discovery (NCD) to detect and classify previously unseen Ragas. Our approach uses a feature extractor trained in a supervised manner to generate embeddings, which are then employed within a contrastive learning framework for self-supervised training, enabling the identification of previously unseen Raga classes. The results demonstrate that the proposed method can accurately detect audio samples corresponding to these novel Ragas, offering a robust way to exploit the vast amount of unlabeled music data available online. This approach reduces the need for manual labeling while expanding the repertoire of recognized Ragas, and it extends naturally to other music data in Music Information Retrieval (MIR).
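A minimal sketch of the two-stage pipeline described above, assuming a PyTorch setting: a feature extractor (pretrained with supervision on labeled Ragas) produces embeddings, and an NT-Xent contrastive objective refines them on unlabeled audio so that novel Raga classes can later be discovered by clustering. All names (RagaEncoder, nt_xent_loss), shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RagaEncoder(nn.Module):
    """Feature extractor; in the described setting, first trained with supervision on labeled Ragas."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        # L2-normalized embeddings, so dot products below are cosine similarities
        return F.normalize(self.net(x), dim=-1)

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent (SimCLR-style) contrastive loss between two augmented views of the same batch."""
    B = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                                    # (2B, D)
    sim = (z @ z.t()) / temperature                                   # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), -1e9)   # mask out self-pairs
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])    # positive = the other view
    return F.cross_entropy(sim, targets)

# Self-supervised stage on unlabeled recordings: two augmented views of each
# audio chunk are pulled together, all other chunks pushed apart. Novel Ragas
# are then found by clustering the resulting embeddings (e.g., k-means).
encoder = RagaEncoder()
view1 = torch.randn(8, 1, 128, 256)   # stand-ins for two augmented log-mel chunks
view2 = torch.randn(8, 1, 128, 256)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
```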
Abstract: Raga Identification is a popular research problem in Music Information Retrieval. The few studies that have explored this task employed various approaches, such as signal processing, Machine Learning (ML) methods, and, more recently, Deep Learning (DL) based methods. However, a key question remains unanswered in all of these works: do these ML/DL methods learn and interpret Ragas in a manner similar to human experts? A further roadblock in this research is the lack of an ample supply of rich, labeled datasets, which such ML/DL methods require. In this paper, we introduce "Prasarbharti Indian Music" version-1 (PIM-v1), a novel dataset comprising 191 hours of meticulously labeled Hindustani Classical Music (HCM) recordings, to the best of our knowledge the largest labeled dataset of HCM recordings. We conduct ablation studies to establish a benchmark classification model for Automatic Raga Identification (ARI) on the PIM-v1 dataset, achieving a chunk-wise F1-score of 0.89 on a subset of 12 Raga classes. Subsequently, we employ model explainability techniques to evaluate the classifier's predictions, aiming to ascertain whether they align with human understanding of Ragas or are driven by arbitrary patterns. We validate the model's predictions by comparing the explanations produced by two ExAI models against human expert annotations. Finally, we analyze explanations for individual test examples to understand how the highlighted regions contribute to the model's correct and incorrect predictions.
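The evaluation steps mentioned above can be pictured with a short sketch, assuming a Python stack with NumPy and scikit-learn: recordings are split into fixed-length chunks, predictions are scored chunk-wise with an F1-score, and ExAI explanations are compared against expert annotations via region overlap. Every name here (chunk_audio, region_overlap, the synthetic labels) is a hypothetical stand-in, not the paper's code.

```python
import numpy as np
from sklearn.metrics import f1_score

def chunk_audio(samples, sr=22050, chunk_seconds=30):
    """Split a mono signal into non-overlapping fixed-length chunks."""
    hop = sr * chunk_seconds
    return [samples[i * hop:(i + 1) * hop] for i in range(len(samples) // hop)]

def region_overlap(explained, annotated):
    """Intersection-over-union between boolean time masks: regions an ExAI
    method highlights vs. regions a human expert marked as Raga-salient."""
    inter = np.logical_and(explained, annotated).sum()
    union = np.logical_or(explained, annotated).sum()
    return inter / union if union else 0.0

# Chunk-wise F1 over a synthetic 12-class prediction set (a stand-in for a
# trained classifier's outputs on held-out test chunks).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 12, size=500)
y_pred = np.where(rng.random(500) < 0.9, y_true, rng.integers(0, 12, size=500))
print("chunk-wise macro F1:", f1_score(y_true, y_pred, average="macro"))
```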