Abstract: Model-driven engineering (MDE) simplifies software development through abstraction, yet challenges such as time constraints, incomplete domain understanding, and adherence to syntactic constraints hinder the design process. This paper presents a study to evaluate the usefulness of a novel approach utilizing large language models (LLMs) and few-shot prompt learning to assist in domain modeling. The aim of this approach is to overcome the need for extensive training of AI-based completion models on scarce domain-specific datasets and to offer versatile support for various modeling activities, providing valuable recommendations to software modelers. To support this approach, we developed MAGDA, a user-friendly tool, with which we conducted a user study to assess the real-world applicability of our approach to domain modeling, offering valuable insights into its usability and effectiveness.
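The abstract does not detail the prompting scheme; as a rough illustration of few-shot prompt learning for domain modeling, the sketch below assembles a prompt from example pairs and asks an LLM for model-element suggestions. The example domains, the suggest_model_elements helper, and the model name are assumptions made for illustration, not MAGDA's actual implementation.

```python
# Minimal sketch of few-shot prompting for domain-model completion.
# The examples, helper name, and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical few-shot examples: partial domain descriptions paired with
# suggested model elements (classes and associations).
FEW_SHOT_EXAMPLES = [
    ("A library lends books to members.",
     "Classes: Library, Book, Member. Associations: Library lends Book, Member borrows Book."),
    ("An airline operates flights between airports.",
     "Classes: Airline, Flight, Airport. Associations: Airline operates Flight, Flight connects Airports."),
]

def suggest_model_elements(partial_description: str) -> str:
    """Ask the LLM to recommend classes, attributes, and associations."""
    messages = [{"role": "system",
                 "content": "You suggest classes, attributes, and associations for domain models."}]
    for description, elements in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": description})
        messages.append({"role": "assistant", "content": elements})
    messages.append({"role": "user", "content": partial_description})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(suggest_model_elements("A hospital schedules appointments for patients with doctors."))
```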
Abstract: Human guidance is often desired in reinforcement learning to improve the performance of the learning agent. However, human insights are often mere opinions and educated guesses rather than well-formulated arguments. While opinions are subject to uncertainty, e.g., due to partial informedness or ignorance about a problem, they also emerge earlier than hard evidence could be produced. Thus, guiding reinforcement learning agents through opinions offers the potential for more performant learning processes, but comes with the challenge of modeling and managing opinions in a formal way. In this article, we present a method to guide reinforcement learning agents through opinions. To this end, we provide an end-to-end method to model and manage advisors' opinions. To assess the utility of the approach, we evaluate it with synthetic and human advisors, at different levels of uncertainty, and under multiple advising strategies. Our results indicate that opinions, even if uncertain, improve the performance of reinforcement learning agents, resulting in higher rewards, more efficient exploration, and a better reinforced policy. Although we demonstrate our approach on a simplified topological running example, it is applicable to more complex, higher-dimensional problems as well.
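The article's formal opinion model is not given here; as one possible illustration, the sketch below lets an advisor's uncertain opinion bias the exploration step of a Q-learning agent. The (belief, uncertainty) encoding per action and the blending rule are assumptions for illustration only, not the formalism used in the article.

```python
# Minimal sketch of opinion-guided exploration in Q-learning.
# The opinion representation and weighting scheme are illustrative assumptions.
import random
from collections import defaultdict

class OpinionGuidedQLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.2):
        self.actions = actions
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, opinion=None):
        """Pick an action; when exploring, sample according to the advisor's
        opinion (if provided) instead of uniformly at random."""
        if random.random() < self.epsilon:
            if opinion:
                # opinion: action -> (belief, uncertainty); weight by belief,
                # spreading the uncertainty mass uniformly over all actions.
                weights = [opinion[a][0] + opinion[a][1] / len(self.actions)
                           for a in self.actions]
                return random.choices(self.actions, weights=weights)[0]
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Usage: an advisor believes "right" is promising but is 40% uncertain.
agent = OpinionGuidedQLearner(actions=["left", "right"])
opinion = {"left": (0.1, 0.4), "right": (0.5, 0.4)}
print(agent.act(state=0, opinion=opinion))
```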
Abstract: By organizing knowledge within a research field, Systematic Reviews (SR) provide valuable leads to steer research. Evidence suggests that SRs have become first-class artifacts in software engineering. However, the tedious manual effort associated with the screening phase of SRs renders these studies a costly and error-prone endeavor. While screening has traditionally been considered not amenable to automation, the advent of generative AI-driven chatbots, backed by large language models, is set to disrupt the field. In this report, we propose an approach to leverage these novel technological developments for automating the screening of SRs. We assess the consistency, classification performance, and generalizability of ChatGPT in screening articles for SRs and compare these figures with those of traditional classifiers used in SR automation. Our results indicate that ChatGPT is a viable option for automating the SR process, but requires careful consideration from developers when integrating it into their SR tools.
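As a rough sketch of how such LLM-based screening could be wired into an SR tool, the snippet below asks a chat model to classify one candidate article against stated inclusion criteria. The prompt wording, the criteria, the screen helper, and the model name are illustrative assumptions rather than the protocol evaluated in the report.

```python
# Minimal sketch of screening a candidate article with a chat model.
# Criteria, prompt, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CRITERIA = "Include studies that apply machine learning to software testing."

def screen(title: str, abstract: str) -> str:
    """Return 'INCLUDE' or 'EXCLUDE' for one candidate article."""
    prompt = (
        f"Inclusion criteria: {CRITERIA}\n\n"
        f"Title: {title}\nAbstract: {abstract}\n\n"
        "Answer with exactly one word: INCLUDE or EXCLUDE."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variation for consistency checks
    )
    return response.choices[0].message.content.strip()

print(screen("Deep learning for test case prioritization",
             "We propose a neural model that ranks test cases by failure likelihood."))
```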