Abstract: This paper explores the development of the Six Stages of Information Search Model and its enhancement through a Large Language Model (LLM)-powered Information Search Process (ISP) in healthcare. The Six Stages Model, a foundational framework in information science, outlines the sequential phases individuals undergo during information seeking: initiation, selection, exploration, formulation, collection, and presentation. Integrating LLM technology into this model optimizes each stage, particularly in healthcare. LLMs enhance query interpretation, streamline information retrieval from complex medical databases, and provide contextually relevant responses, thereby improving the efficiency and accuracy of medical information searches. This fusion not only helps healthcare professionals access critical data swiftly but also empowers patients with reliable, personalized health information, fostering a more informed and effective healthcare environment.
Abstract: Artificial Intelligence (AI) aims to elevate healthcare by aiding clinical decision support. Overcoming the challenges of designing ethical AI will enable clinicians, physicians, healthcare professionals, and other stakeholders to use and trust AI in healthcare settings. Through a thematic analysis, this study identifies the major ethical principles influencing the utility performance of AI at different technological levels, such as data access, algorithms, and systems. We observed that justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI. This data-driven study analyzed secondary survey data of 36 AI experts from the Pew Research Center (2020) to categorize the top ethical principles of AI design. To resolve the ethical issues identified by the meta-analysis and domain experts, we propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
Abstract: Stroke is a significant cause of mortality and morbidity, necessitating early predictive strategies to minimize risks. Traditional methods for evaluating patients, such as the Acute Physiology and Chronic Health Evaluation (APACHE II, IV) and the Simplified Acute Physiology Score III (SAPS III), have limited accuracy and interpretability. This paper proposes a novel approach: an interpretable, attention-based transformer model for early stroke mortality prediction. The model seeks to address the limitations of previous predictive models by providing both interpretability (clear, understandable explanations of the model) and fidelity (a truthful account of the model's dynamics from input to output). Furthermore, the study explores and compares fidelity and interpretability scores using Shapley values and attention-based scores to improve model explainability. The research objectives include designing an interpretable attention-based transformer model, evaluating its performance against existing models, and providing feature importance derived from the model.