Abstract: The expanding role of Artificial Intelligence (AI) across engineering domains highlights the challenges of deploying AI models in new operational environments, which typically demand substantial investment in data collection and model training. Rapid adoption of AI therefore requires assessing whether pre-trained models can be used in previously unseen operational settings with little or no additional data. However, the black-box nature of many AI models makes such assessments difficult to interpret. To address this issue, this paper proposes a science-based certification methodology for evaluating the viability of pre-trained data-driven models in operational environments they were not trained for. The methodology calls for a deep integration of domain knowledge, in the form of theoretical and analytical models from physics and related disciplines, with data-driven AI models. This approach provides tools for developing safe engineering systems and gives decision-makers confidence in the trustworthiness and safety of AI-based models in environments characterized by limited training data and dynamic, uncertain conditions. The paper demonstrates the methodology in a real-world safety-critical application: traffic state estimation. Simulation results show how the proposed methodology efficiently quantifies the physical inconsistencies exhibited by pre-trained AI models and, by leveraging analytical models, gauges their applicability to new operational environments. This work advances the understanding and deployment of AI models by offering a certification framework that strengthens confidence in their reliability and safety across a range of operational conditions.
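
To illustrate the kind of physical-consistency check the abstract refers to, the sketch below scores a pre-trained model's traffic state predictions against the conservation-of-vehicles law of the Lighthill-Whitham-Richards (LWR) model. This is a minimal sketch under assumed conventions: the function name lwr_residual, the finite-difference grid, the Greenshields-style flow used to fabricate stand-in predictions, and the mean-absolute-residual score are all illustrative assumptions, not the paper's actual implementation.

import numpy as np

def lwr_residual(rho, q, dx, dt):
    """
    Illustrative physical-inconsistency score for predicted traffic states.

    Checks a predicted density field rho(t, x) and flow field q(t, x)
    against the LWR conservation law  d(rho)/dt + d(q)/dx = 0
    using central finite differences on the interior of the grid.

    rho, q : 2D arrays of shape (num_timesteps, num_cells)
    dx, dt : spatial and temporal grid spacing
    Returns the mean absolute residual (0 = perfectly consistent).
    """
    drho_dt = (rho[2:, 1:-1] - rho[:-2, 1:-1]) / (2.0 * dt)
    dq_dx = (q[1:-1, 2:] - q[1:-1, :-2]) / (2.0 * dx)
    return float(np.mean(np.abs(drho_dt + dq_dx)))

# Hypothetical usage: rho_hat and q_hat would come from a pre-trained
# estimator evaluated on a road segment outside its training distribution.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rho_hat = rng.uniform(0.0, 1.0, size=(50, 100))  # stand-in density predictions
    q_hat = rho_hat * 30.0 * (1.0 - rho_hat)         # Greenshields-style flow (assumed)
    score = lwr_residual(rho_hat, q_hat, dx=10.0, dt=1.0)
    print(f"mean |conservation residual|: {score:.4f}")

A score of this form gives a single scalar that a certification workflow could compare against a domain-specific tolerance: predictions that badly violate the conservation law in the new environment would flag the pre-trained model as unsuitable for deployment there without further adaptation.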