nano-Macro Reliability Laboratory, Centre for Nanotechnology, Indian Institute of Technology Roorkee, Roorkee, Uttarakhand, 247667, India
Abstract: Speech recognition is a key challenge in natural language processing, requiring low latency, efficient computation, and strong generalization for real-time applications. While software-based artificial neural networks (ANNs) excel at this task, they are computationally intensive and depend heavily on data pre-processing. Neuromorphic computing, with its low-latency and energy-efficient advantages, holds promise for audio classification. Memristive nanowire networks, combined with pre-processing techniques such as Mel-Frequency Cepstral Coefficient (MFCC) extraction, have been widely used for associative learning, but such pre-processing can be power-intensive, undermining the latency benefits. This study pioneers the use of the memristive and spatio-temporal properties of nanowire networks for audio signal classification without pre-processing. A nanowire network simulation is paired with three linear classifiers for 10-class audio MNIST classification and binary speaker-generalization tests. The hybrid system delivers substantial benefits: strong data compression with only 3% of the nanowire network's output utilized, a 10-fold reduction in computational latency, and up to a 28.5% improvement in classification accuracy (with a logistic regression classifier). Precision and recall improve by 10% and 17% on multi-speaker datasets, and by 24% and 17% on individual-speaker datasets, compared with classifiers trained on raw data. This work provides a foundational proof of concept for using memristive nanowire networks (NWNs) in edge-computing devices, showcasing their potential for efficient, real-time audio signal processing with reduced computational overhead and power consumption, and enabling the development of advanced neuromorphic computing solutions.
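The abstract describes a hybrid pipeline in which raw audio is passed through a memristive nanowire network and a small fraction of the network's output states is read out by simple linear classifiers. The Python sketch below is a rough illustration of that structure only: a surrogate random nonlinear reservoir stands in for the NWN simulation, and the dataset shapes, 512-node reservoir, 3% readout, and logistic regression readout are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the hybrid NWN-plus-linear-classifier pipeline.
# A fixed random nonlinear projection acts as a stand-in for the memristive
# nanowire network simulation; the real NWN dynamics are not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset shapes: 1000 spoken-digit clips, 4000 raw samples each.
n_clips, n_samples, n_classes = 1000, 4000, 10
X_raw = rng.standard_normal((n_clips, n_samples))
y = rng.integers(0, n_classes, size=n_clips)

# Surrogate "nanowire network": a fixed random projection with a saturating
# nonlinearity, producing a high-dimensional state per audio clip.
n_nodes = 512
W_in = rng.standard_normal((n_samples, n_nodes)) / np.sqrt(n_samples)
states = np.tanh(X_raw @ W_in)

# Data compression: keep only ~3% of the network's output nodes as features,
# mirroring the sparse readout fraction reported in the abstract.
keep = rng.choice(n_nodes, size=max(1, int(0.03 * n_nodes)), replace=False)
X_features = states[:, keep]

# Linear readout: logistic regression classifier on the compressed features.
X_tr, X_te, y_tr, y_te = train_test_split(X_features, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Because only the readout is trained, the heavy, untrained transformation (here the surrogate reservoir, in the paper the physical nanowire network) carries the temporal feature extraction, which is what allows the classifier itself to stay small and fast.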
Abstract: This paper presents NT-Java-1.1B, an open-source, specialized code language model built on StarCoderBase-1.1B and designed for coding tasks in Java. NT-Java-1.1B achieves state-of-the-art performance, surpassing its base model and the majority of similarly sized models on the MultiPL-E Java code benchmark. While there have been studies on extending large, generic pre-trained models to improve proficiency in specific programming languages such as Python, similar investigations into small code models for other programming languages are lacking. Large code models require specialized hardware such as GPUs for inference, highlighting the need for research into small code models that can be deployed on developer desktops. This paper addresses that gap by developing a small Java code model, NT-Java-1.1B, and its quantized versions, which perform comparably to open models of around 1.1B parameters on the MultiPL-E Java code benchmarks, making them well suited for desktop deployment. This paper establishes the foundation for a family of NT models specialized across programming languages and sizes.
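To make the desktop-deployment claim concrete, the following hedged sketch shows how a ~1.1B-parameter causal code model could be loaded and queried for Java completion with Hugging Face Transformers. The model identifier is a placeholder assumption, not a confirmed checkpoint name, and the snippet does not reflect the authors' own inference setup.

```python
# Hedged sketch: local inference with a small (~1.1B-parameter) Java code model.
# "your-org/NT-Java-1.1B" is a hypothetical model id used only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/NT-Java-1.1B"  # placeholder; substitute the released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # small enough for CPU/desktop use

prompt = "public static int fibonacci(int n) {"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A model of this size fits comfortably in a few gigabytes of RAM, and quantized variants would shrink the footprint further, which is the practical motivation for targeting developer desktops rather than GPU servers.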