Abstract: Reinforcement learning faces challenges of robustness and explainability across diverse environments. Traditional Q-learning algorithms struggle to make effective decisions and to exploit historical learning experience. To overcome these limitations, we propose Cognitive Belief-Driven Q-Learning (CBDQ), which integrates subjective belief modeling into the Q-learning framework, improving decision-making accuracy by endowing agents with human-like learning and reasoning capabilities. Drawing inspiration from cognitive science, our method maintains a subjective belief distribution over the expected value of actions, leveraging a cluster-based subjective belief model that enables agents to reason about the probability associated with each decision. CBDQ effectively mitigates overestimation and optimizes decision-making policies by integrating historical experience with current contextual information, mimicking the dynamics of human decision-making. We evaluate the proposed method on discrete control benchmark tasks in various complex environments. The results demonstrate that CBDQ exhibits stronger adaptability, robustness, and human-like characteristics in these environments, outperforming other baselines. We hope this work offers researchers a fresh perspective on understanding and explaining Q-learning.
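To make the mechanism concrete, the following is a minimal sketch of a belief-weighted Q-update. The class name, the tabular setting (in place of the paper's cluster-based state model), and the specific belief update (mixing visitation frequencies with a softmax over Q-values) are illustrative assumptions, not the paper's exact formulation; the key idea shown is that the bootstrap target is an expectation under the belief distribution rather than a hard max, which softens overestimation.

```python
import numpy as np

class CBDQSketch:
    """Illustrative sketch of belief-driven Q-learning.

    All names and the exact update rule here are assumptions for
    illustration; the paper's cluster-based belief model may differ.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.99, tau=1.0):
        self.Q = np.zeros((n_states, n_actions))
        # Subjective belief: a probability distribution over actions per
        # state, initialised uniform and refined from experience (assumption).
        self.belief = np.full((n_states, n_actions), 1.0 / n_actions)
        self.counts = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.tau = alpha, gamma, tau

    def update(self, s, a, r, s_next):
        # Belief-weighted target: expectation of Q under the belief
        # distribution instead of a hard max (cf. Expected SARSA),
        # which mitigates overestimation of the next-state value.
        target = r + self.gamma * np.dot(self.belief[s_next], self.Q[s_next])
        self.Q[s, a] += self.alpha * (target - self.Q[s, a])
        # Blend historical experience (visitation frequencies) with current
        # value estimates (softmax over Q) to form the new belief.
        self.counts[s, a] += 1
        freq = self.counts[s] / self.counts[s].sum()
        soft = np.exp(self.Q[s] / self.tau)
        soft /= soft.sum()
        self.belief[s] = 0.5 * freq + 0.5 * soft
```

Replacing the max operator with an expectation under a learned belief is what ties this sketch to the overestimation claim in the abstract; the mixing coefficient and softmax temperature are hypothetical knobs.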
Abstract: This research tackles the challenge of integrating heterogeneous data for behavior recognition in the domain of Pain Recognition, presenting a novel methodology that harmonizes statistical correlations with a human-centered approach. By leveraging a diverse range of deep learning architectures, we demonstrate the adaptability and efficacy of our approach in improving model performance across various complex scenarios. The novelty of our methodology lies in the strategic incorporation of statistical relevance weights and the segmentation of modalities from a human-centric perspective, enhancing model precision and providing an explainable analysis of multimodal data. This study goes beyond traditional modality fusion techniques by underscoring the role of data diversity and customized modality segmentation in improving pain behavior analysis. Introducing a framework that matches each modality with a suitable classifier based on its statistical significance signals a move towards customized and accurate multimodal fusion strategies. Our contributions extend beyond Pain Recognition by delivering new insights into modality fusion and human-centered computing, contributing to explainable AI and bolstering patient-centric healthcare interventions. We thus fill a significant gap in the effective and interpretable fusion of multimodal data, establishing a new standard for future inquiry in pain behavior recognition and allied fields.
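As an illustration of the fusion scheme described above, here is a minimal sketch assuming a simple setup: each modality trains its own classifier, and their predicted probabilities are fused with weights derived from a statistical relevance score. The synthetic data, the choice of classifiers, and the correlation-based relevance measure are all illustrative assumptions; the paper's actual architectures and statistical weighting may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

def relevance_weight(X, y):
    # Mean absolute Pearson correlation between features and the label;
    # a stand-in for the paper's statistical relevance measure (assumption).
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return float(np.nanmean(corr))

# Hypothetical two-modality dataset: one modality carries label signal,
# the other is noise, so their relevance weights should differ.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
modalities = {
    "physiological": rng.normal(size=(200, 8)) + y[:, None] * 0.5,
    "video": rng.normal(size=(200, 16)),
}
# Each modality is matched with its own classifier (choices are illustrative).
models = {
    "physiological": LogisticRegression(max_iter=1000),
    "video": RandomForestClassifier(n_estimators=50, random_state=0),
}

weights, probs = {}, {}
for name, X in modalities.items():
    models[name].fit(X, y)
    weights[name] = relevance_weight(X, y)
    probs[name] = models[name].predict_proba(X)[:, 1]

# Late fusion: relevance-weighted average of per-modality probabilities.
total = sum(weights.values())
fused = sum(weights[m] / total * probs[m] for m in modalities)
pred = (fused >= 0.5).astype(int)
```

In this sketch the weakly relevant modality is down-weighted automatically, which is one way the relevance weights can make the fused decision both more accurate and more interpretable.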