Abstract:Variational Autoencoders (VAEs) are a powerful framework for learning compact latent representations, while NeuralODEs excel in learning transient system dynamics. This work combines the strengths of both to create fast surrogate models with adjustable complexity. By leveraging the VAE's dimensionality reduction using a non-hierarchical prior, our method adaptively assigns stochastic noise, naturally complementing known NeuralODE training enhancements and enabling probabilistic time-series modeling. We show that standard Latent ODEs struggle with dimensionality reduction in systems with time-varying inputs. Our approach mitigates this by continuously propagating variational parameters through time, establishing fixed information channels in latent space. This results in a flexible and robust method that can learn different system complexities, e.g. deep neural networks or linear matrices. It thereby enables efficient approximation of the Koopman operator without the need to predefine its dimensionality. As our method balances dimensionality reduction against reconstruction accuracy, we call it Balanced Neural ODE (B-NODE). We demonstrate the effectiveness of this method on academic test cases and apply it to a real-world example of a thermal power plant.
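To illustrate the central idea of propagating variational parameters through time, the following is a minimal sketch, not the authors' implementation: an encoder produces mean and log-variance, a NeuralODE evolves the stacked pair as its state, and the decoder samples via the reparameterization trick. All module names and sizes are our own assumptions; the sketch uses torch and torchdiffeq.

```python
# Minimal sketch (not the authors' code) of propagating variational
# parameters (mu, log sigma^2) through time with a NeuralODE.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class LatentDynamics(nn.Module):
    """Right-hand side acting on the stacked [mu, logvar] state."""
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * latent_dim))
    def forward(self, t, z):          # z: (batch, 2 * latent_dim)
        return self.net(z)

class BNodeSketch(nn.Module):
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, 2 * latent_dim)   # -> mu, logvar
        self.dynamics = LatentDynamics(latent_dim)
        self.decoder = nn.Linear(latent_dim, obs_dim)
    def forward(self, x0, t):
        z0 = self.encoder(x0)                 # initial variational parameters
        zt = odeint(self.dynamics, z0, t)     # (time, batch, 2 * latent_dim)
        mu, logvar = zt.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.decoder(z), mu, logvar    # mu/logvar feed an ELBO loss
```

Because mu and logvar are themselves ODE states, each latent coordinate keeps a fixed meaning over the whole trajectory, which is the "fixed information channel" property the abstract refers to.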
Abstract:We provide a dataset for enabling Deep Generative Models (DGMs) in engineering design and propose methods to automate data labeling by utilizing large-scale foundation models. GeoBiked is curated to contain 4,355 bicycle images, annotated with structural and technical features, and is used to investigate two automated labeling techniques: the utilization of consolidated latent features (Hyperfeatures) from image-generation models to detect geometric correspondences (e.g. the position of the wheel center) in structural images, and the generation of diverse text descriptions for structural images. GPT-4o, a vision-language model (VLM), is instructed to analyze images and produce diverse descriptions aligned with the system prompt. By representing technical images as Diffusion-Hyperfeatures, drawing geometric correspondences between them becomes possible. The detection accuracy of geometric points in unseen samples is improved by presenting multiple annotated source images. GPT-4o has sufficient capabilities to generate accurate descriptions of technical images. Grounding the generation only on images leads to diverse descriptions but causes hallucinations, while grounding it on categorical labels restricts the diversity. Using both as input balances creativity and accuracy. Successfully using Hyperfeatures for geometric correspondence suggests that this approach can be used for general point-detection and annotation tasks in technical images. Labeling such images with text descriptions using VLMs is possible, but depends on the model's detection capabilities, careful prompt engineering, and the selection of input information. Applying foundation models in engineering design is largely unexplored. We aim to bridge this gap with a dataset for exploring the training, finetuning, and conditioning of DGMs in this field, and by suggesting approaches to bootstrap foundation models to process technical images.
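The dual-grounding strategy (image plus categorical labels) can be sketched as a single VLM call. The prompt wording and label schema below are hypothetical, not the authors' system prompt; the OpenAI chat-completions API with image input is used as shown.

```python
# Illustrative sketch of grounding description generation on both the
# image and its categorical labels. Prompt text is hypothetical.
import base64
from openai import OpenAI

client = OpenAI()

def describe(image_path, labels):
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Describe the bicycle's geometry and technical "
                        "features. Stay consistent with the given labels."},
            {"role": "user", "content": [
                {"type": "text", "text": f"Known labels: {labels}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ]},
        ],
    )
    return response.choices[0].message.content

# e.g. describe("bike_0042.png", {"frame": "diamond", "style": "road"})
```

Passing the labels constrains the model against hallucination while the image still drives descriptive diversity, matching the balance reported in the abstract.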
Abstract:The implicit assumption that human and autonomous agents have certain capabilities is omnipresent in modern teaming concepts. However, none of them formalizes these capabilities in a flexible and quantifiable way. In this paper, we propose Capability Deltas, which establish a quantifiable basis for crafting autonomous assistance systems in which one agent takes the leader role and the other the supporter role. We derive the quantification of human capabilities from an established assessment and documentation procedure used in the occupational inclusion of people with disabilities. This allows us to quantify the delta, or gap, between a team's current capability and a requirement established by a work process. The concept is then extended to a multi-dimensional capability space, which allows us to formalize compensation behavior and to assess the actions required of the autonomous agent.
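A toy numerical reading of the delta in a multi-dimensional capability space might look as follows. The dimensions, scores, and the clipping rule are our own illustrative assumptions, not the paper's formalization.

```python
# Toy illustration (hypothetical numbers) of a capability delta: the
# per-dimension gap between a work-process requirement and the human
# team member's assessed capability.
import numpy as np

dimensions  = ["perception", "dexterity", "endurance"]
requirement = np.array([0.8, 0.6, 0.9])   # demanded by the work process
human       = np.array([0.9, 0.7, 0.4])   # assessed human capability

# Positive entries mark dimensions where the autonomous agent, acting
# in the supporter role, must compensate; surpluses are clipped to zero.
delta = np.clip(requirement - human, 0.0, None)
for dim, d in zip(dimensions, delta):
    print(f"{dim}: compensation needed = {d:.2f}")
```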
Abstract:Recent advancements in image synthesis are fueled by the advent of large-scale diffusion models. Yet, integrating realistic object visualizations seamlessly into new or existing backgrounds without extensive training remains a challenge. This paper introduces InsertDiffusion, a novel, training-free diffusion architecture that efficiently embeds objects into images while preserving their structural and identity characteristics. Our approach utilizes off-the-shelf generative models and eliminates the need for fine-tuning, making it ideal for rapid and adaptable visualizations in product design and marketing. We demonstrate superior performance over existing methods in terms of image realism and alignment with input conditions. By decomposing the generation task into independent steps, InsertDiffusion offers a scalable solution that extends the capabilities of diffusion models for practical applications, achieving high-quality visualizations that maintain the authenticity of the original objects.
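One plausible reading of "decomposing the generation task into independent steps" is a compose-then-refine pipeline: paste the object into the background, then harmonize with a training-free img2img pass. The sketch below is our own simplified stand-in, not the actual InsertDiffusion architecture; file names and the prompt are hypothetical, and it relies on the off-the-shelf diffusers img2img pipeline.

```python
# Hypothetical compose-then-refine sketch in the spirit of the abstract
# (not the InsertDiffusion pipeline itself).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

background = Image.open("background.png").convert("RGB").resize((512, 512))
obj = Image.open("object.png").convert("RGBA")
background.paste(obj, (128, 200), mask=obj)     # naive composite

# Low strength preserves the object's structure and identity while the
# model blends lighting and shadows into the new background.
result = pipe(prompt="a product photo of a bicycle in a showroom",
              image=background, strength=0.35, guidance_scale=7.5).images[0]
result.save("inserted.png")
```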
Abstract:The synthesis of product design concepts stands at the crux of early-phase development processes for technical products, traditionally posing an intricate interdisciplinary challenge. The application of deep learning methods, particularly Deep Generative Models (DGMs), holds the promise of automating and streamlining manual iterations and therefore introducing heightened levels of innovation and efficiency. However, DGMs have yet to be widely adopted into the synthesis of product design concepts. This paper aims to explore the reasons behind this limited application and derive the requirements for successful integration of these technologies. We systematically analyze DGM-families (VAE, GAN, Diffusion, Transformer, Radiance Field), assessing their strengths, weaknesses, and general applicability for product design conception. Our objective is to provide insights that simplify the decision-making process for engineers, helping them determine which method might be most effective for their specific challenges. Recognizing the rapid evolution of this field, we hope that our analysis contributes to a fundamental understanding and guides practitioners towards the most promising approaches. This work seeks not only to illuminate current challenges but also to propose potential solutions, thereby offering a clear roadmap for leveraging DGMs in the realm of product design conception.
Abstract:One of the core concepts in science, and something that happens intuitively in everyday dynamic systems modeling, is the combination of models or methods. Especially in dynamical systems modeling, two or more structures are often combined to obtain an architecture that is more powerful or efficient for a specific application (area). Furthermore, even physical simulations are combined with machine learning architectures to increase prediction accuracy or to optimize computational performance. In this work, we briefly discuss which types of models are usually combined and propose a model interface that is capable of expressing a wide variety of mixed algebraic, discrete, and differential-equation-based models. Further, we examine different established as well as new ways of combining these models from a system-theoretical point of view and highlight two challenges that require a special approach: algebraic loops and local event affect functions in discontinuous models. Finally, we propose a new wildcard topology that is capable of describing the generic connection between two combined models in an easy-to-interpret fashion and that can be learned as part of a gradient-based optimization procedure. The contributions of this paper are highlighted in a proof of concept: different connection topologies between two models are learned, interpreted, and compared using the proposed methodology and software implementation.
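A learnable connection topology can be sketched as a trainable routing matrix between one model's outputs and another's inputs. This is our own minimal sketch under assumed names, not the paper's software implementation.

```python
# Minimal sketch of a learnable "wildcard" connection: a trainable
# matrix C routes outputs of model A to inputs of model B and is
# optimized jointly with the models by gradient descent.
import torch
import torch.nn as nn

class WildcardConnection(nn.Module):
    def __init__(self, n_outputs_a, n_inputs_b):
        super().__init__()
        # C[i, j] ~ strength of the connection from output j of A
        # to input i of B; initialized to "no connection".
        self.C = nn.Parameter(torch.zeros(n_inputs_b, n_outputs_a))
    def forward(self, y_a):
        # Soft routing during training; thresholding sigmoid(C) after
        # training yields an interpretable 0/1 connection plan.
        return torch.sigmoid(self.C) @ y_a
```

The sigmoid-weighted matrix keeps the topology differentiable during optimization while remaining readable as a wiring diagram afterwards, which matches the "easy to interpret" goal stated above.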
Abstract:In autonomous driving, accurately interpreting the movements of other road users and leveraging this knowledge to forecast future trajectories is crucial. This is typically achieved through the integration of map data and tracked trajectories of various agents. Numerous methodologies combine this information into a single embedding per agent, which is then used to predict future behavior. However, these approaches have a notable drawback in that they may lose exact location information during the encoding process. While the encoding still contains general map information, the generation of valid and consistent trajectories is not guaranteed, which can cause the predicted trajectories to stray from the actual lanes. This paper introduces a new refinement module designed to project the predicted trajectories back onto the actual map, rectifying these discrepancies and leading to more consistent predictions. This versatile module can be readily incorporated into a wide range of architectures. Additionally, we propose a novel scene encoder that handles all relations between agents and their environment in a single unified heterogeneous graph attention network. By analyzing the attention values on the different edges of this graph, we gain unique insights into the neural network's inner workings, leading towards a more explainable prediction.
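The core projection idea can be sketched with a brute-force nearest-neighbor snap onto lane centerline points. The paper's refinement module is a learned component; the function below, including the fixed blend weight, is our own simplified illustration.

```python
# Hypothetical sketch of projecting predicted waypoints back onto the
# map: snap each point toward its closest lane-centerline point.
import torch

def project_to_lanes(trajectory, centerline_pts):
    """trajectory: (T, 2) predicted waypoints; centerline_pts: (N, 2)."""
    d = torch.cdist(trajectory, centerline_pts)    # (T, N) pairwise distances
    nearest = centerline_pts[d.argmin(dim=1)]      # closest map point per step
    # Blend instead of hard snapping so the predicted dynamics survive.
    return 0.5 * trajectory + 0.5 * nearest
```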
Abstract:Using vanilla NeuralODEs to model large and/or complex systems often fails for two reasons: stability and convergence. NeuralODEs can describe stable as well as unstable dynamic systems. Selecting an appropriate numerical solver is not trivial, because the NeuralODE's properties change during training. If the NeuralODE becomes stiffer, a suboptimal solver may need to perform very small solver steps, which significantly slows down the training process. If the NeuralODE becomes too unstable, the numerical solver might not be able to solve it at all, causing the training process to terminate. Often, this is tackled by choosing a computationally expensive solver that is robust to unstable and stiff ODEs, but at the cost of significantly decreased training performance. Our method, on the other hand, allows enforcing ODE properties that fit a specific solver or application-related boundary conditions. Concerning convergence behavior, NeuralODEs often run into local minima, especially if the system to be learned is highly dynamic and/or oscillates over multiple periods. Because of the vanishing gradient at a local minimum, the NeuralODE is often not capable of leaving it and converging to the right solution. We present a technique to add knowledge of ODE properties based on eigenvalues - such as (partial) stability, oscillation capability, frequency, damping, and/or stiffness - to the training objective of a NeuralODE. We exemplify our method on a linear as well as a nonlinear system model and show that the presented training process is far more robust against local minima, instabilities, and sparse data samples, and improves training convergence and performance.
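As a concrete instance of an eigenvalue-based training term, one can penalize positive real parts of the local Jacobian eigenvalues of the right-hand side, which pushes the learned dynamics toward stability. This is our own simplified sketch of one such penalty, not the paper's full objective covering frequency, damping, or stiffness.

```python
# Minimal sketch of an eigenvalue-based stability penalty for a
# NeuralODE: eigenvalues of the local Jacobian with positive real part
# indicate locally unstable dynamics and are penalized.
import torch

def stability_penalty(f, states):
    """f: state -> derivative (the NeuralODE right-hand side at fixed t);
    states: iterable of sample state tensors."""
    penalty = torch.zeros(())
    for x in states:
        J = torch.autograd.functional.jacobian(f, x)   # local Jacobian
        eig = torch.linalg.eigvals(J)                  # complex eigenvalues
        penalty = penalty + torch.relu(eig.real).sum() # Re > 0 => unstable
    return penalty

# total_loss = mse(pred, target) + lambda_stab * stability_penalty(f, xs)
```

Analogous terms on the imaginary parts would address oscillation frequency, and bounds on the eigenvalue spread would address stiffness, in line with the properties listed above.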
Abstract:The term NeuralODE describes the structural combination of an Artificial Neural Network (ANN) and a numerical solver for Ordinary Differential Equations (ODEs), where the former acts as the right-hand side of the ODE to be solved. This concept was further extended by a black-box model in the form of a Functional Mock-up Unit (FMU) to obtain a subclass of NeuralODEs, named NeuralFMUs. The resulting structure features the advantages of first-principle and data-driven modeling approaches in one single simulation model: a higher prediction accuracy compared to conventional First Principle Models (FPMs), and a lower training effort compared to purely data-driven models. We present an intuitive workflow to set up and use NeuralFMUs, enabling the encapsulation and reuse of existing conventional models exported from common modeling tools. Moreover, we exemplify this concept by deploying a NeuralFMU for a consumption simulation based on a Vehicle Longitudinal Dynamics Model (VLDM), a typical use case in the automotive industry. Related challenges that are often neglected in scientific use cases, such as noisy real-world measurements, an unknown system state, or high-frequency discontinuities, are handled in this contribution. To build a hybrid model with a higher prediction quality than the original FPM, we briefly highlight two open-source libraries: FMI.jl for integrating FMUs into the Julia programming environment, and its extension FMIFlux.jl, which allows for the integration of FMUs into a neural network topology to finally obtain a NeuralFMU.
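Structurally, a NeuralFMU places a black-box first-principle model inside the ODE right-hand side and lets an ANN correct its derivative. The sketch below shows that structure in Python for illustration only; the actual tooling (FMI.jl, FMIFlux.jl) is Julia-based, and the additive-correction layout is one of several possible topologies.

```python
# Language-agnostic sketch of the NeuralFMU structure: an ANN corrects
# the derivative of a black-box first-principle model (stand-in for the
# FMU) inside the ODE solver loop.
import torch
import torch.nn as nn
from torchdiffeq import odeint

class NeuralFMUSketch(nn.Module):
    def __init__(self, fpm_rhs, state_dim, hidden=32):
        super().__init__()
        self.fpm_rhs = fpm_rhs    # black-box FPM derivative (the "FMU")
        self.ann = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, state_dim))
    def forward(self, t, x):
        dx = self.fpm_rhs(t, x)   # first-principle derivative
        return dx + self.ann(x)   # learned correction term

# x_sol = odeint(NeuralFMUSketch(my_fpm, 4), x0, t)  # hybrid simulation
```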
Abstract:Imitation Learning from observation describes policy learning in a way similar to human learning: an agent's policy is trained by observing an expert performing a task. While many state-only imitation learning approaches are based on adversarial imitation learning, one main drawback is that adversarial training is often unstable and lacks a reliable convergence estimator. If the true environment reward is unknown and cannot be used to select the best-performing model, this can result in poor real-world policy performance. We propose a non-adversarial learning-from-observations approach, together with an interpretable convergence and performance metric. Our training objective minimizes the Kullback-Leibler divergence (KLD) between the policy and expert state-transition trajectories, which can be optimized in a non-adversarial fashion. Such methods demonstrate improved robustness when learned density models guide the optimization. We further improve the sample efficiency by rewriting the KLD minimization as the Soft Actor-Critic objective based on a modified reward that uses additional density models estimating the environment's forward and backward dynamics. Finally, we evaluate the effectiveness of our approach on well-known continuous control environments and show state-of-the-art performance while having a reliable performance estimator, compared to several recent learning-from-observation methods.
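The non-adversarial reward can be sketched as a log-density ratio over state transitions, evaluated by learned density models rather than a discriminator. The decomposition below is our own simplified reading of the abstract, not the paper's exact equations; the density-model classes are left abstract.

```python
# Simplified sketch of a non-adversarial, density-based reward for
# learning from observation: the policy is rewarded where its state
# transitions are likely under the expert's transition distribution.
import torch

def modified_reward(s, s_next, log_p_expert, log_p_policy):
    """log_p_*: learned log-density models over transitions (s, s')."""
    x = torch.cat([s, s_next], dim=-1)
    # Minimizing the KLD between transition distributions corresponds,
    # up to terms absorbed by the density models, to maximizing this
    # log-density ratio with a standard Soft Actor-Critic learner.
    return log_p_expert(x) - log_p_policy(x)
```

Because both densities are explicit models rather than an adversarially trained discriminator, the same quantity doubles as the interpretable convergence estimator mentioned above.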