Abstract: Synthetic data generation holds considerable promise, offering avenues to enhance privacy, fairness, and data accessibility. Despite the availability of various methods for generating synthetic tabular data, challenges persist, particularly in specialized applications such as survival analysis. One significant obstacle in survival data generation is censoring, i.e., not knowing the precise timing of the observed (target) event for certain instances. Existing methods struggle to reproduce the real distribution of event times for both observed (uncensored) and censored events: the generated event-time distributions do not accurately match those underlying the real data. So motivated, we propose a simple paradigm for producing synthetic survival data by generating covariates conditioned on event times (and censoring indicators), thus allowing one to reuse existing conditional generative models for tabular data without significant computational overhead, and without making assumptions about the (usually unknown) mechanism underlying censoring. We evaluate this method via extensive experiments on real-world datasets. Our methodology outperforms multiple competitive baselines at generating survival data, while improving the performance of downstream survival models trained on the synthetic data and tested on real data.
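A minimal sketch of the covariate-conditioned paradigm described above. It samples (time, censoring) pairs from their empirical joint distribution and then generates covariates conditioned on them; a per-cell Gaussian stands in for a learned conditional tabular generator, and all names, the toy data, and the binning scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" survival data: covariates X, event times T, censoring flags D.
n, d = 1000, 5
X = rng.normal(size=(n, d))
T = rng.exponential(scale=np.exp(X[:, 0]))   # event time depends on covariates
D = (rng.uniform(size=n) < 0.7).astype(int)  # 1 = observed event, 0 = censored

# Step 1: sample (t, delta) pairs directly from their empirical joint
# distribution, preserving the real event-time/censoring structure without
# modeling the censoring mechanism.
idx = rng.integers(0, n, size=n)
T_syn, D_syn = T[idx], D[idx]

# Step 2: generate covariates conditioned on (t, delta). Here we bin t and
# fit a Gaussian per (time-bin, censoring) cell; a real system would plug in
# any conditional generative model for tabular data instead.
bins = np.quantile(T, np.linspace(0, 1, 6)[1:-1])
cell_real = np.digitize(T, bins) * 2 + D
cell_syn = np.digitize(T_syn, bins) * 2 + D_syn

X_syn = np.empty_like(X)
for c in np.unique(cell_real):
    real_mask, syn_mask = cell_real == c, cell_syn == c
    mu = X[real_mask].mean(axis=0)
    cov = np.cov(X[real_mask], rowvar=False) + 1e-6 * np.eye(d)
    X_syn[syn_mask] = rng.multivariate_normal(mu, cov, size=syn_mask.sum())

print("synthetic dataset:", X_syn.shape, T_syn.shape, D_syn.shape)
```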
Abstract: Rising urban populations have led to a surge in vehicle use, making traffic monitoring and management indispensable. Acoustic traffic monitoring (ATM) offers a cost-effective and efficient alternative to more computationally expensive approaches such as those based on computer vision. In this paper, we present MVD and MVDA: two open datasets for the development of acoustic traffic monitoring and vehicle-type classification algorithms, containing audio recordings of moving vehicles. The datasets contain four classes: trucks, cars, motorbikes, and no vehicle. Additionally, we propose a novel and efficient way to accurately classify these acoustic signals using cepstrum- and spectrum-based local and global audio features and a multi-input neural network. Experimental results show that our methodology improves upon the established baselines of previous works, achieving accuracies of 91.98% and 96.66% on the MVD and MVDA datasets, respectively. Finally, the proposed model was deployed as an Android application to make it accessible for testing and to demonstrate its efficacy.
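A hedged sketch of the multi-input idea in this abstract: one branch consumes frame-level (local) cepstral features and another consumes clip-level (global) spectral statistics, merged before a four-way softmax. The feature choices, layer sizes, and input shapes are assumptions for illustration, not the exact MVD/MVDA architecture.

```python
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers, Model

def extract_features(path, sr=22050, n_mfcc=20):
    """Return (local, global) features for one clip; parameters are assumed."""
    y, sr = librosa.load(path, sr=sr, duration=3.0)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # cepstral, local
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # spectral, global
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
    global_feats = np.hstack([centroid.mean(), centroid.std(),
                              rolloff.mean(), rolloff.std()])
    return mfcc.T, global_feats  # shapes: (frames, n_mfcc) and (4,)

# Two-branch network: a Conv1D branch for the MFCC sequence and a dense
# branch for the global descriptors, concatenated before classification.
local_in = layers.Input(shape=(None, 20))
x = layers.Conv1D(64, 5, activation="relu")(local_in)
x = layers.GlobalAveragePooling1D()(x)
global_in = layers.Input(shape=(4,))
g = layers.Dense(32, activation="relu")(global_in)
merged = layers.concatenate([x, g])
out = layers.Dense(4, activation="softmax")(merged)  # truck/car/motorbike/none
model = Model([local_in, global_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```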
Abstract: The detection and classification of vehicles on the road is a crucial task for traffic monitoring. Computer Vision (CV) algorithms typically dominate this task, but CV methods can suffer in poor lighting conditions and require substantial computational power. Additionally, installing cameras in sensitive and secure areas raises privacy concerns. In contrast, acoustic traffic monitoring is cost-effective and can provide greater accuracy, particularly in low-lighting conditions and in places where cameras cannot be installed. In this paper, we consider the task of acoustic vehicle sub-type classification, classifying acoustic signals into four classes: car, truck, bike, and no vehicle. We experiment with Mel spectrograms, MFCCs, and GFCCs as features and perform data pre-processing to train a simple, well-optimized CNN that performs well at the task. With MFCC features and careful data pre-processing, our proposed methodology improves upon the established state-of-the-art baseline on the IDMT-Traffic dataset, achieving an accuracy of 98.95%.
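A minimal sketch of a compact 2D CNN over MFCC "images" in the spirit of the pipeline this abstract describes. The input shape, layer counts, and hyperparameters are illustrative assumptions rather than the reported model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4            # car, truck, bike, no vehicle
INPUT_SHAPE = (40, 87, 1)  # (n_mfcc, time frames, channel); assumed shape

# Small CNN: two conv/pool blocks, dropout for regularization, then a
# dense classifier head with a four-way softmax.
model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.3),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```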