Abstract: A multi-modal machine learning system uses multiple distinct data sources and types to improve its performance. This article proposes a system that combines results from several types of models, each trained on a different data signal. As an example to illustrate the efficacy of the system, an experiment is described in which multiple types of data are collected from rats suffering from seizures. These data include electrocorticography readings, piezoelectric motion sensor data, and video recordings. Separate models are trained on each type of data, with the goal of classifying each time frame as containing a seizure or not. After each model has generated its classification predictions, these results are combined. While each data signal works adequately on its own for prediction purposes, the significant imbalance in class labels leads to increased numbers of false positives, which can be filtered out by utilizing all data sources together. This paper demonstrates that, after postprocessing and combining the per-model predictions, the multi-modal system achieves higher classification accuracy than any individual data source.
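A late-fusion scheme of this kind can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes each modality-specific model has already produced a binary prediction (1 = seizure) for every time frame, and it fuses them by majority vote so that a false positive raised by any single modality is suppressed. The arrays and the two-of-three threshold are illustrative assumptions; the abstract does not specify the exact combination rule.

```python
import numpy as np

# Hypothetical per-frame binary predictions (1 = seizure, 0 = no seizure)
# from the three modality-specific models; real values would come from
# the trained ECoG, piezoelectric, and video classifiers.
ecog_preds  = np.array([1, 0, 1, 1, 0, 1])
piezo_preds = np.array([1, 0, 0, 1, 0, 0])
video_preds = np.array([1, 0, 1, 1, 1, 0])

# Majority vote: a frame is labeled a seizure only if at least two of the
# three modalities agree, filtering out single-modality false positives.
votes = ecog_preds + piezo_preds + video_preds
fused_preds = (votes >= 2).astype(int)

print(fused_preds)  # -> [1 0 1 1 0 0]
```

Thresholded voting is one common fusion choice; weighted averaging of per-model probabilities before thresholding is another.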
Abstract: This paper introduces an approach that combines the language reasoning capabilities of large language models (LLMs) with the benefits of local training to tackle complex, domain-specific tasks. Specifically, the authors demonstrate their approach by extracting structured condition codes from pathology reports. The proposed approach utilizes local LLMs, which can be fine-tuned to respond to specific generative instructions and provide structured outputs. The authors collected a dataset of over 150k uncurated surgical pathology reports, containing gross descriptions, final diagnoses, and condition codes. They trained different model architectures, including LLaMA, BERT, and Longformer, and evaluated their performance. The results show that the LLaMA-based models significantly outperform BERT-style models across all evaluated metrics, even at greatly reduced numerical precision. The LLaMA models perform especially well with large datasets, demonstrating their ability to handle complex, multi-label tasks. Overall, this work presents an effective approach for utilizing LLMs to perform domain-specific tasks on accessible hardware, with potential applications in the medical domain, where complex data extraction and classification are required.
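The abstract mentions fine-tuning local LLaMA models at greatly reduced precision on accessible hardware. One common way to achieve this (not necessarily the authors' exact recipe) is 4-bit quantization combined with a LoRA adapter via the Hugging Face transformers and peft libraries. The base model name, LoRA hyperparameters, and prompt format below are illustrative assumptions, not details taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # illustrative choice, not confirmed by the abstract

# Load the base model with 4-bit weights so it fits on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)

# Attach a small LoRA adapter so only a fraction of parameters are trained.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# A fine-tuning example would pair a generative instruction with the desired
# structured output, e.g. a pathology report as input and its condition codes
# as the target completion (placeholder text shown here).
prompt = (
    "Extract the condition codes from the following pathology report:\n"
    "<report text>\nCodes:"
)
inputs = tokenizer(prompt, return_tensors="pt")
```

Training would then proceed with a standard causal-language-modeling loss over such instruction/response pairs, updating only the adapter weights.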