Edinburgh Napier University
Abstract: Trauma significantly impacts global health, accounting for over 5 million deaths annually, which is comparable to mortality rates from diseases such as tuberculosis, AIDS, and malaria. In Iran, the financial repercussions of road traffic accidents represent approximately 2% of the nation's Gross National Product each year. Bleeding is the leading cause of mortality in trauma patients within the first 24 hours following an injury, making rapid diagnosis and assessment of severity crucial. Trauma patients require comprehensive scans of all organs, generating a large volume of data. Evaluating whole-body CT images is time-consuming and requires significant expertise, underscoring the need for efficient time management in diagnosis. Efficient diagnostic processes can significantly reduce treatment costs and decrease the likelihood of secondary complications. In this context, the development of a reliable Decision Support System (DSS) for trauma triage, particularly one focused on the abdominal area, is vital. This paper presents a novel method for detecting liver bleeding and lacerations in CT scans using the Pix2Pix GAN image-to-image translation model. The effectiveness of the method is quantified with the Dice score, with the model achieving 97% for liver bleeding and 93% for liver laceration detection. These results represent a notable improvement over current state-of-the-art approaches. The system's design integrates seamlessly with existing medical imaging technologies, making it a practical addition to emergency medical services. This research underscores the potential of advanced image translation models such as Pix2Pix in improving the precision and speed of medical diagnostics in critical care scenarios.
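To make the evaluation metric concrete, the following is a minimal, illustrative sketch of the Dice score used above to quantify overlap between a predicted lesion mask and a reference annotation; it is not the authors' pipeline, and the Pix2Pix network, the CT data, and any thresholding are outside its scope. The mask shapes and values below are invented for the toy example.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (1 = lesion, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: a 4x4 predicted mask and a 4x4 reference mask that partially overlap.
pred = np.zeros((4, 4), dtype=np.uint8)
truth = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:3] = 1   # predicted bleeding region (4 pixels)
truth[1:3, 1:2] = 1  # reference region overlapping two of them
truth[3, 3] = 1      # plus one reference pixel the prediction missed
print(f"Dice = {dice_score(pred, truth):.3f}")
```

The small epsilon simply guards against division by zero when both masks are empty.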
Abstract: Implicit gender bias in Large Language Models (LLMs) is a well-documented problem, and gender assumptions introduced into automatic translations can perpetuate real-world biases. However, some LLMs use heuristics or post-processing to mask such bias, making investigation difficult. Here, we examine bias in LLMs via back-translation, using the DeepL translation API to investigate the bias evinced when repeatedly translating a set of 56 Software Engineering tasks used in a previous study. Each statement starts with 'she', and is translated first into a 'genderless' intermediate language and then back into English; we then examine pronoun choice in the back-translated texts. We extend prior research in the following ways: (1) by comparing results across five intermediate languages, namely Finnish, Indonesian, Estonian, Turkish and Hungarian; (2) by proposing a novel metric for assessing the variation in gender implied in the repeated translations, avoiding the over-interpretation of individual pronouns apparent in earlier work; (3) by investigating sentence features that drive bias; and (4) by comparing results from three time-lapsed datasets to establish the reproducibility of the approach. We found that some languages display similar patterns of pronoun use, falling into three loose groups, but that patterns vary between groups; this underlines the need to work with multiple languages. We also identify the main verb appearing in a sentence as a likely significant driver of implied gender in the translations. Moreover, we see a good level of replicability in the results, and establish that our variation metric proves robust despite an obvious change in the behaviour of the DeepL translation API during the course of the study. These results show that the back-translation method can provide further insights into bias in language models.
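As an illustration of the general round-trip procedure described above, the sketch below uses the official deepl Python client to translate hypothetical task statements into Finnish and back, then tallies the subject pronouns that return. The auth key is a placeholder, the example statements are not drawn from the 56 study tasks, and the simple tally stands in for, rather than reproduces, the paper's variation metric.

```python
import re

import deepl  # official DeepL API client (pip install deepl); requires an API auth key

# Hypothetical task statements in the style of the study (the real set contains 56).
statements = [
    "She writes unit tests before refactoring the legacy module.",
    "She reviews the pull request and merges it into the main branch.",
]

translator = deepl.Translator("YOUR_AUTH_KEY")  # placeholder key

def back_translate(text: str, via: str) -> str:
    """Round-trip: English -> intermediate language -> English."""
    forward = translator.translate_text(text, source_lang="EN", target_lang=via)
    back = translator.translate_text(forward.text, source_lang=via, target_lang="EN-GB")
    return back.text

def pronoun_counts(texts):
    """Tally subject pronouns across a batch of back-translated sentences."""
    counts = {"she": 0, "he": 0, "they": 0}
    for t in texts:
        for p in counts:
            counts[p] += len(re.findall(rf"\b{p}\b", t.lower()))
    return counts

# Round-trip every statement via Finnish ("FI") and inspect the pronouns that come back.
round_trips = [back_translate(s, via="FI") for s in statements]
print(pronoun_counts(round_trips))
```

Swapping the `via` code (e.g. "ID", "ET", "TR", "HU") repeats the probe with a different 'genderless' intermediate language.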
Abstract: Since the radiology reports needed for clinical practice and research are written and stored as free-text narratives, extracting relevant information for further analysis is difficult. In these circumstances, natural language processing (NLP) techniques can facilitate automatic information extraction and the transformation of free-text formats into structured data. In recent years, deep learning (DL)-based models have been adapted for NLP experiments with promising results. Despite the significant potential of DL models based on artificial neural networks (ANN) and convolutional neural networks (CNN), these models face limitations when implemented in clinical practice. Transformers, another new DL architecture, have been increasingly applied to improve the process. Therefore, in this study, we propose a transformer-based fine-grained named entity recognition (NER) architecture for clinical information extraction. We collected 88 abdominopelvic sonography reports in free-text format and annotated them based on our developed information schema. The text-to-text transfer transformer (T5) model and SciFive, a pre-trained domain-specific adaptation of the T5 model, were fine-tuned to extract entities and relations and to transform the input into a structured format. Our transformer-based model outperformed previously applied approaches such as ANN and CNN models, achieving ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while providing an interpretable structured report.
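For readers unfamiliar with the text-to-text setup, the sketch below shows the generic T5 inference pattern using the public t5-base checkpoint from Hugging Face transformers. The task prefix and example sentence are assumptions for illustration only; the study's actual models are fine-tuned on its own sonography reports and annotation schema, which are not reproduced here.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Public t5-base checkpoint as a stand-in for the fine-tuned T5/SciFive models.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Hypothetical prompt: a task prefix plus one free-text sonography sentence.
report_sentence = "extract entities: The liver is normal in size with homogeneous echotexture."

# Encode the input, generate the structured output sequence, and decode it back to text.
inputs = tokenizer(report_sentence, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In the text-to-text framing, both the entity/relation extraction and the conversion to a structured report are expressed as string-to-string generation, which is what allows evaluation with ROUGE and BLEU.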