Abstract: One of the most widely used tasks for evaluating Large Language Models (LLMs) is Multiple-Choice Question Answering (MCQA). While open-ended question answering is notoriously hard to evaluate, MCQA is, in principle, easy to assess: the model's answer is assumed to be simple to extract and can be compared directly against a set of predefined choices. However, recent studies have begun to question the reliability of MCQA evaluation, showing that multiple factors can significantly affect the reported performance of LLMs, especially when the model generates free-form text before selecting one of the answer choices. In this work, we shed light on the inconsistencies of MCQA evaluation strategies, which can lead to inaccurate and misleading model comparisons. We systematically analyze whether existing answer extraction methods are aligned with human judgment, and how they are influenced by answer constraints in the prompt across different domains. Our experiments demonstrate that traditional evaluation strategies often underestimate LLM capabilities, while LLM-based answer extractors are prone to systematic errors. Moreover, we reveal a fundamental trade-off between including format constraints in the prompt to simplify answer extraction and allowing models to generate free-form text to improve reasoning. Our findings call for standardized evaluation methodologies and highlight the need for more reliable and consistent MCQA evaluation practices.
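To make the extraction problem concrete, the following is a minimal, illustrative sketch of a traditional rule-based answer extractor of the kind the abstract refers to; the regex patterns and function names are our own assumptions, not the methods evaluated in the paper. The second usage example shows how a free-form response can defeat such rules, so a correct answer gets scored as wrong.

```python
import re

# Minimal sketch of a rule-based MCQA answer extractor (illustrative only;
# not the extraction method evaluated in the paper). It scans a model's
# free-form response for an explicit choice letter among A-D.
CHOICE_PATTERNS = [
    re.compile(r"answer\s*(?:is|:)?\s*\(?([A-D])\)?", re.IGNORECASE),  # "the answer is (B)"
    re.compile(r"^\(?([A-D])\)?[.:)\s]", re.MULTILINE),                # a line starting with "B." or "(B)"
]

def extract_choice(response: str) -> str | None:
    """Return the first choice letter found, or None if extraction fails."""
    for pattern in CHOICE_PATTERNS:
        match = pattern.search(response)
        if match:
            return match.group(1).upper()
    return None  # no recognizable choice -> typically counted as incorrect

if __name__ == "__main__":
    print(extract_choice("Let me think step by step... so the answer is (C)."))  # -> C
    print(extract_choice("Both options seem plausible, but I lean toward B."))   # -> None
```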
Abstract: This research introduces an innovative method for the early screening of cardiorespiratory diseases based on an acquisition protocol that leverages the Inertial Measurement Units (IMUs) of commodity smartphones together with deep learning techniques. We collected, in a clinical setting, a dataset of breathing kinematics recorded via accelerometer and gyroscope readings from five distinct body regions. We propose an end-to-end deep learning pipeline for early cardiorespiratory disease screening, incorporating a preprocessing step that segments the data into individual breathing cycles and a bidirectional recurrent module that captures features across the different body regions. We employed leave-one-out cross-validation with Bayesian optimization for hyperparameter tuning and model selection. The experimental results consistently demonstrated the superior performance of a bidirectional Long Short-Term Memory (Bi-LSTM) network as the feature encoder, yielding an average sensitivity of $0.81 \pm 0.02$, specificity of $0.82 \pm 0.05$, F1 score of $0.81 \pm 0.02$, and accuracy of $(80.2 \pm 3.9)\%$ across diverse seed variations. We also assessed generalization on a skewed distribution comprising exclusively healthy patients not used in training, obtaining a true negative rate of $(74.8 \pm 4.5)\%$. The sustained accuracy of predictions over time within a single patient's breathing cycles underscores the efficacy of the preprocessing strategy, highlighting the model's ability to discern significant patterns throughout distinct phases of the respiratory cycle. This investigation underscores the potential of widely available smartphones as devices for timely, at-home cardiorespiratory disease screening in the general population, offering crucial assistance to public health efforts (especially during pandemic outbreaks, such as the recent COVID-19).
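As a rough illustration of the pipeline described above, here is a minimal PyTorch sketch of a Bi-LSTM feature encoder over segmented breathing cycles from five body regions; the channel counts, hidden size, and fusion-by-concatenation scheme are our assumptions, since the abstract does not specify the paper's exact architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class BiLSTMScreener(nn.Module):
    """Illustrative sketch of a Bi-LSTM encoder for breathing-cycle
    classification; layer sizes and the region-fusion scheme are assumed,
    not taken from the paper."""

    def __init__(self, n_channels: int = 6, hidden: int = 64, n_regions: int = 5):
        super().__init__()
        # 6 channels per region: 3-axis accelerometer + 3-axis gyroscope.
        self.encoder = nn.LSTM(n_channels, hidden, batch_first=True, bidirectional=True)
        # Binary head over features concatenated across the five body regions.
        self.head = nn.Linear(2 * hidden * n_regions, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, regions, time, channels) -- one segmented breathing cycle.
        batch, regions, time, channels = x.shape
        _, (h_n, _) = self.encoder(x.reshape(batch * regions, time, channels))
        # Concatenate the final forward/backward hidden states, then merge regions.
        feats = torch.cat([h_n[0], h_n[1]], dim=-1).reshape(batch, -1)
        return self.head(feats)  # logit: healthy vs. pathological

model = BiLSTMScreener()
cycle = torch.randn(8, 5, 120, 6)  # batch of 8 cycles, 120 time steps per cycle
print(model(cycle).shape)          # torch.Size([8, 1])
```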