Abstract: Manually creating Planning Domain Definition Language (PDDL) descriptions is difficult, error-prone, and requires extensive expert knowledge. However, this knowledge is already embedded in engineering models and can be reused. Therefore, this contribution presents a comprehensive workflow for the automated generation of PDDL descriptions from integrated system and product models. The proposed workflow leverages Model-Based Systems Engineering (MBSE) to organize and manage system and product information, translating it automatically into PDDL syntax for planning purposes. By connecting system and product models with planning aspects, it ensures that changes in these models are quickly reflected in updated PDDL descriptions, facilitating efficient and adaptable planning processes. The workflow is validated within a use case from aircraft assembly.
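As a minimal sketch of the kind of model-to-PDDL translation such a workflow performs, the following snippet renders one modeled operation as a PDDL action. The model structure and the assembly example are hypothetical stand-ins for information exported from an MBSE tool, not the paper's actual implementation.

```python
# Minimal sketch: render one operation from a (hypothetical) system model
# as a PDDL action. Parameter, precondition, and effect lists stand in for
# information that would be extracted from the MBSE model.

from typing import List

def action_to_pddl(name: str, params: List[str], pre: List[str], eff: List[str]) -> str:
    """Render one modeled operation as a PDDL action block."""
    plist = " ".join(f"?{p}" for p in params)
    return (
        f"(:action {name}\n"
        f"  :parameters ({plist})\n"
        f"  :precondition (and {' '.join(pre)})\n"
        f"  :effect (and {' '.join(eff)}))"
    )

# Illustrative assembly step (invented for this sketch):
print(action_to_pddl(
    "mount-bracket",
    ["r", "b"],
    ["(free ?r)", "(at-station ?b)"],
    ["(mounted ?b)", "(not (free ?r))"],
))
```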
Abstract: The following contribution introduces a concept that employs Large Language Models (LLMs) and a chatbot interface to enhance SPARQL query generation for ontologies, thereby facilitating intuitive access to formalized knowledge. Utilizing natural language inputs, the system converts user inquiries into accurate SPARQL queries that strictly query the factual content of the ontology, effectively preventing misinformation or fabrication by the LLM. To enhance the quality and precision of outcomes, additional textual information from established domain-specific standards is integrated into the ontology for precise descriptions of its concepts and relationships. An experimental study assesses the accuracy of generated SPARQL queries, revealing significant benefits of using LLMs for querying ontologies and highlighting areas for future research.
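A minimal sketch of such a pipeline, assuming a stubbed LLM call: the natural language question is turned into SPARQL and executed strictly against the ontology graph with rdflib, so answers can only come from facts present in the graph. `ask_llm`, the example graph, and the vocabulary are illustrative placeholders.

```python
# Minimal sketch: NL question -> SPARQL (stubbed LLM) -> execution on the
# ontology only, so the answer is grounded in the graph's factual content.

from rdflib import Graph

def ask_llm(question: str, schema_hint: str) -> str:
    # Placeholder for the actual LLM call; returns a fixed query here.
    return """
        SELECT ?name WHERE {
            ?m a <http://example.org/Machine> ;
               <http://example.org/name> ?name .
        }
    """

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:m1 a ex:Machine ; ex:name "Milling Center" .
""", format="turtle")

query = ask_llm("Which machines exist?", schema_hint="ex:Machine, ex:name")
for row in g.query(query):
    print(row.name)  # only facts present in the ontology are returned
```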
Abstract: In the following contribution, a method is introduced that integrates domain expert-centric ontology design with the Cross-Industry Standard Process for Data Mining (CRISP-DM). This approach aims to efficiently build an application-specific ontology tailored to the corrective maintenance of Cyber-Physical Systems (CPS). The proposed method is divided into three phases. In phase one, ontology requirements are systematically specified, defining the relevant knowledge scope. Accordingly, CPS life cycle data is contextualized in phase two using domain-specific ontological artifacts. This formalized domain knowledge is then utilized in the CRISP-DM to efficiently extract new insights from the data. Finally, the newly developed data-driven model is employed to populate and expand the ontology. Thus, information extracted from this model is semantically annotated and aligned with the existing ontology in phase three. The applicability of this method has been evaluated in an anomaly detection case study for a modular process plant.
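A minimal sketch of phase three under these assumptions: outputs of a data-driven model (here, a hypothetical anomaly flag per sensor) are annotated as triples and added to the existing ontology with rdflib. The namespace and class names are illustrative, not the paper's actual vocabulary.

```python
# Minimal sketch: populate the ontology with results of a data-driven model.
# Namespace, classes, and properties are invented for illustration.

from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/maintenance#")
g = Graph()
g.bind("ex", EX)

# Hypothetical output of the anomaly detection model:
anomalies = {"pressure_sensor_3": True, "flow_sensor_1": False}

for sensor, is_anomalous in anomalies.items():
    if is_anomalous:
        obs = EX[f"observation_{sensor}"]
        g.add((obs, RDF.type, EX.AnomalyObservation))   # semantic annotation
        g.add((obs, EX.observedAt, EX[sensor]))          # alignment to the CPS
        g.add((obs, EX.flaggedBy, Literal("anomaly-detection-model-v1")))

print(g.serialize(format="turtle"))
```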
Abstract: The integration of Artificial Intelligence (AI) into automation systems has the potential to enhance efficiency and to address currently unsolved technical challenges. However, the industry-wide adoption of AI is hindered by the lack of standardized documentation for the complex compositions of automation systems, AI software, production hardware, and their interdependencies. This paper proposes a formal model using standards and ontologies to provide clear and structured documentation of AI applications in automation systems. The proposed information model for artificial intelligence in automation systems (AIAS) utilizes ontology design patterns to map and link various aspects of automation systems and AI software. Validated through a practical example, the model demonstrates its effectiveness in improving documentation practices and aiding the sustainable implementation of AI in industrial settings.
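A minimal sketch of the idea, assuming an illustrative vocabulary rather than the actual AIAS ontology: an AI component and its dependencies on production hardware are documented as triples and queried with rdflib.

```python
# Minimal sketch: document an AI component and its links to production
# hardware as an ontology, then query the dependency. The ex: vocabulary is
# an invented stand-in for the AIAS model's actual terms.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/aias#")
g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/aias#> .

    ex:qualityModel a ex:AISoftware ;
        ex:implementsFunction ex:defectDetection ;
        ex:consumesDataFrom   ex:cameraStation2 ;
        ex:deployedOn         ex:edgeDevice1 .

    ex:cameraStation2 a ex:ProductionHardware .
    ex:edgeDevice1    a ex:ComputeResource .
""", format="turtle")

# Which hardware does the documented AI software depend on?
q = "SELECT ?hw WHERE { ?sw a ex:AISoftware ; ex:consumesDataFrom ?hw . }"
for row in g.query(q, initNs={"ex": EX}):
    print(row.hw)
```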
Abstract: Mobile robots, becoming increasingly autonomous, are capable of operating in diverse and unknown environments. This flexibility allows them to fulfill goals independently and to adapt their actions dynamically without rigidly predefined control code. However, their autonomous behavior complicates guaranteeing safety and reliability, because a human operator has only limited ability to accurately supervise and verify each robot's actions. To ensure the safety and reliability of autonomous mobile robots, both aspects of dependability, methods are needed in both the planning and the execution of their missions. In this article, a twofold approach is presented that ensures fault removal in the context of mission planning and fault prevention during mission execution for autonomous mobile robots. First, the approach comprises a concept based on formal verification applied during the planning phase of missions. Second, it comprises a rule-based concept applied during mission execution. A use case applying the approach is presented, discussing how the two concepts complement each other and what contribution they make to certain aspects of dependability.
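A minimal sketch of the rule-based fault-prevention concept, under assumed rules and state fields: each rule inspects the robot's current state during mission execution, and any violation prevents the next action. The rules and thresholds below are illustrative assumptions, not the paper's actual rule set.

```python
# Minimal sketch: rule-based checks evaluated before each action during
# mission execution. Rules return an error message, or "" if satisfied.

from typing import Callable, Dict, List

Rule = Callable[[Dict], str]

def min_battery(state: Dict) -> str:
    return "" if state["battery"] >= 0.2 else "battery below safe threshold"

def inside_geofence(state: Dict) -> str:
    x, y = state["position"]
    return "" if 0 <= x <= 100 and 0 <= y <= 100 else "position outside geofence"

def check_before_action(state: Dict, rules: List[Rule]) -> List[str]:
    """Evaluate all rules; any violation prevents the next action."""
    return [msg for rule in rules if (msg := rule(state))]

violations = check_before_action(
    {"battery": 0.15, "position": (42.0, 17.5)},
    [min_battery, inside_geofence],
)
print(violations or "all rules satisfied, action may proceed")
```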
Abstract: To achieve flexible and adaptable systems, capability ontologies are increasingly leveraged to describe functions in a machine-interpretable way. However, modeling such complex ontological descriptions is still a manual and error-prone task that requires significant effort and ontology expertise. This contribution presents an innovative method to automate capability ontology modeling using Large Language Models (LLMs), which have proven to be well suited for such tasks. Our approach requires only a natural language description of a capability, which is then automatically inserted into a predefined prompt using a few-shot prompting technique. After the LLM has been prompted, the resulting capability ontology is automatically verified in a loop with the LLM through several steps that check its overall correctness: first a syntax check, then a check for contradictions, and finally a check for hallucinations and missing ontology elements. Our method greatly reduces manual effort, as only the initial natural language description and a final human review with possible corrections are necessary, thereby streamlining the capability ontology generation process.
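A minimal sketch of this verify-and-repair loop, with the LLM call and the three checks stubbed out; a real implementation would use an LLM API, an RDF parser for the syntax check, and a reasoner for the contradiction check.

```python
# Minimal sketch: generate a capability ontology from a description, then
# verify it in a loop with the LLM. All external calls are placeholders.

def call_llm(prompt: str) -> str:
    return "<generated capability ontology in Turtle>"  # placeholder LLM call

def syntax_check(ttl: str) -> str:
    return ""  # placeholder: e.g. attempt an RDF parse, return the error text

def contradiction_check(ttl: str) -> str:
    return ""  # placeholder: e.g. run an OWL reasoner, report inconsistencies

def hallucination_check(ttl: str, description: str) -> str:
    return ""  # placeholder: compare ontology elements against the description

def generate_capability(description: str, max_rounds: int = 3) -> str:
    ontology = call_llm(f"Few-shot prompt with examples...\n{description}")
    for _ in range(max_rounds):
        for check in (syntax_check, contradiction_check,
                      lambda t: hallucination_check(t, description)):
            error = check(ontology)
            if error:  # feed the error back to the LLM and regenerate
                ontology = call_llm(f"Fix this error: {error}\n{ontology}")
                break
        else:
            return ontology  # all checks passed
    return ontology  # hand over for final human review

print(generate_capability("Capability: drill a hole of 5 mm diameter"))
```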
Abstract: Capability ontologies are increasingly used to model functionalities of systems or machines. The creation of such ontological models with all properties and constraints of capabilities is very complex and can only be done by ontology experts. However, Large Language Models (LLMs) have shown that they can generate machine-interpretable models from natural language text input and can thus support engineers and ontology experts. Therefore, this paper investigates how LLMs can be used to create capability ontologies. We present a study with a series of experiments in which capabilities of varying complexity are generated using different prompting techniques and different LLMs. Errors in the generated ontologies are recorded and compared. To analyze the quality of the generated ontologies, a semi-automated approach based on RDF syntax checking, OWL reasoning, and SHACL constraints is used. The results of this study are very promising: even for complex capabilities, the generated ontologies are almost free of errors.
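A minimal sketch of two of these quality checks using rdflib and pySHACL: parsing performs the RDF syntax check, and SHACL validation checks the constraints. The file names are placeholders, and the OWL reasoning step is omitted for brevity.

```python
# Minimal sketch: RDF syntax check via parsing, then SHACL validation with
# pySHACL. File names are placeholders for a generated ontology and shapes.

from rdflib import Graph
from pyshacl import validate

# Step 1: RDF syntax check -- parsing fails loudly on malformed Turtle.
data = Graph().parse("generated_capability.ttl", format="turtle")
shapes = Graph().parse("capability_shapes.ttl", format="turtle")

# Step 2: SHACL constraint check on the parsed graph.
conforms, _, report_text = validate(data, shacl_graph=shapes, inference="rdfs")
print("conforms:", conforms)
if not conforms:
    print(report_text)
```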
Abstract: In response to the global shift towards renewable energy resources, the production of green hydrogen through electrolysis is emerging as a promising solution. Modular electrolysis plants, designed for flexibility and scalability, offer a dynamic response to the increasing demand for hydrogen while accommodating the fluctuations inherent in renewable energy sources. However, optimizing their operation is challenging, especially when a large number of electrolysis modules, each with potentially different characteristics, need to be coordinated. To address these challenges, this paper presents a decentralized scheduling model that optimizes the operation of modular electrolysis plants using the Alternating Direction Method of Multipliers (ADMM). The model aims to balance hydrogen production with fluctuating demand, to minimize the marginal Levelized Cost of Hydrogen (mLCOH), and to ensure adaptability to operational disturbances. A case study validates the accuracy of the model in calculating mLCOH values under nominal load conditions and demonstrates its responsiveness to dynamic changes, such as electrolyzer module malfunctions and scale-up scenarios.
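A minimal sketch of a decentralized ADMM sharing update in this spirit, assuming quadratic module cost curves and illustrative coefficients: each module updates its own setpoint locally, and a shared dual variable (a price-like signal) drives total production toward the demand. This is a textbook ADMM sharing formulation, not the paper's exact model.

```python
# Minimal sketch: ADMM "sharing" updates for N electrolyzer modules with
# assumed quadratic costs a_i*x^2 + b_i*x, coordinating sum(x) = demand.

import numpy as np

a = np.array([0.8, 1.2, 1.0])   # quadratic cost coefficients per module
b = np.array([2.0, 1.5, 1.8])   # linear cost coefficients per module
demand, rho = 9.0, 1.0
N = len(a)

x = np.zeros(N)  # module setpoints
u = 0.0          # scaled dual variable (shared price-like signal)

for _ in range(100):
    # Local step: each module minimizes its own cost plus the ADMM penalty;
    # for quadratic costs the minimizer has a closed form.
    v = x - x.mean() + demand / N - u
    x = (rho * v - b) / (2 * a + rho)
    # Coordination step: update the dual from the remaining imbalance.
    u += x.mean() - demand / N

print("setpoints:", np.round(x, 3), "total:", round(float(x.sum()), 3))
```

At convergence all modules operate at equal marginal cost while the setpoints sum to the demand, which is exactly the optimality condition of the coupled scheduling problem.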
Abstract: To achieve a highly agile and flexible production, it is envisioned that industrial production systems gradually become more decentralized, interconnected, and intelligent. Within this vision, production assets collaborate with each other, exhibiting a high degree of autonomy. Furthermore, knowledge about individual production assets is readily available throughout their entire life cycles. To realize this vision, adequate use of information technology is required. Two commonly applied software paradigms in this context are Software Agents (referred to as Agents) and Digital Twins (DTs). This work presents a systematic comparison of Agents and DTs in industrial applications. The goal of the study is to determine the differences, similarities, and potential synergies between the two paradigms. The comparison is based on the purposes for which Agents and DTs are applied, the properties and capabilities exhibited by these software paradigms, and how they can be allocated within the Reference Architecture Model Industry 4.0. The comparison reveals that Agents are commonly employed in the collaborative planning and execution of production processes, while DTs typically play a more passive role in monitoring production resources and processing information. Although these observations imply characteristic sets of capabilities and properties for both Agents and DTs, a clear and definitive distinction between the two paradigms cannot be made. Instead, the analysis indicates that production assets utilizing a combination of Agents and DTs would demonstrate high degrees of intelligence, autonomy, sociability, and fidelity. To achieve this, further standardization is required, particularly in the field of DTs.
Abstract: Individualized products and shorter product life cycles have driven companies to rethink traditional mass production. New concepts like Industry 4.0 foster the advent of decentralized production control and distribution of information. Multi-agent systems are a promising technology for realizing such scenarios. This contribution analyses the requirements for an agent-based, decentralized, and integrated scheduling approach. One of these requirements is a linearly scaling communication architecture, as communication between the agents is a major driver of scheduling execution time. The approach schedules production, transportation, buffering, and shared resource operations such as tools in an integrated manner to account for the interdependencies between them. The logistics requirements also reflect constraints of large workpieces, such as buffer scarcity. The approach aims to provide a general solution that is also applicable to large system sizes as found, for example, in production networks spanning multiple companies. Furthermore, it is applicable to different kinds of factory organization (flow shop, job shop, etc.). The approach is explained using an example based on industrial requirements. Experiments have been conducted to evaluate the scheduling execution time, and the results show the approach's linear scaling behavior. In addition, the approach's ability to conduct concurrent negotiations is analyzed.
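A minimal sketch of one contract-net-style negotiation round, a protocol commonly used in agent-based scheduling; the agent classes and the bid criterion are illustrative assumptions, not the paper's exact protocol. Communication grows linearly with the number of resource agents: one announcement plus one bid each.

```python
# Minimal sketch: an order agent announces an operation, resource agents bid
# with their earliest completion time, and the best bid wins the allocation.

from dataclasses import dataclass
from typing import List

@dataclass
class ResourceAgent:
    name: str
    busy_until: float

    def bid(self, duration: float) -> float:
        """Earliest completion time this resource can offer."""
        return self.busy_until + duration

def announce(operation: str, duration: float, resources: List[ResourceAgent]) -> str:
    # One announcement plus one bid per resource: message count scales
    # linearly with the number of resource agents.
    bids = {r.name: r.bid(duration) for r in resources}
    winner = min(bids, key=bids.get)
    print(f"{operation}: bids={bids} -> awarded to {winner}")
    return winner

resources = [ResourceAgent("mill_1", 4.0), ResourceAgent("mill_2", 1.5)]
announce("drill_housing", duration=2.0, resources=resources)
```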