Abstract: Mechanisms are designed to perform functions in various fields. Often, there is no unique mechanism that performs a well-defined function. For example, vehicle suspensions are designed to improve driving performance and ride comfort, but different types are used depending on the operating environment. This variability in design makes performance comparison difficult. In addition, the traditional design process is multi-step, gradually narrowing the set of design candidates while performing costly analyses to meet target performance. Recently, AI models have been used to reduce the computational cost of finite element analysis (FEA). However, limitations remain in data availability and in differences between analysis environments, especially when transitioning from low-fidelity to high-fidelity analysis. In this paper, we propose a multi-fidelity design framework that recommends optimal types and designs of mechanical mechanisms. As an application, vehicle suspension systems were selected, and several types were defined. For each type, mechanism parameters were generated and converted into 3D CAD models, followed by low-fidelity rigid-body dynamic analysis under driving conditions. To effectively build a deep learning-based multi-fidelity surrogate model, the low-fidelity results were clustered using DBSCAN, and 5% of the designs were sampled for high-cost flexible-body dynamic analysis. After training the multi-fidelity model, a multi-objective optimization problem was formulated for the performance metrics of each suspension type. Finally, we recommend the optimal type and design for a given input to optimize ride comfort-related performance metrics. To validate the proposed methodology, we extracted basic design rules from the Pareto solutions using data mining techniques. We also verified the effectiveness and applicability of the framework by comparing its results with those obtained from a conventional deep learning-based design process.
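The abstract describes clustering the low-fidelity results with DBSCAN and sampling 5% of them for the expensive high-fidelity analysis. The following is a minimal sketch of that subsampling step, not the authors' implementation: the choice of features, `eps`, and `min_samples` are illustrative assumptions.

```python
# Sketch: cluster low-fidelity responses with DBSCAN, then draw ~5% of designs
# (spread across clusters, including the noise cluster) for high-fidelity analysis.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def select_high_fidelity_samples(lf_results, ratio=0.05, eps=0.5, min_samples=5, seed=0):
    """lf_results: (n_designs, n_metrics) array of low-fidelity performance metrics."""
    rng = np.random.default_rng(seed)
    X = StandardScaler().fit_transform(lf_results)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)

    selected = []
    for label in np.unique(labels):
        idx = np.flatnonzero(labels == label)
        # take ~5% of each cluster so sparse regions of the response space
        # (including DBSCAN noise points, label -1) are not ignored
        n_pick = max(1, int(round(ratio * idx.size)))
        selected.extend(rng.choice(idx, size=n_pick, replace=False))
    return np.sort(np.array(selected))

# usage: indices of designs to send to flexible-body dynamic analysis
# hf_indices = select_high_fidelity_samples(low_fidelity_metrics)
```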
Abstract: This study aims to overcome the limitations of conventional deep-learning approaches based on convolutional neural networks, whose applicability to complex geometries and unstructured meshes is limited by their inherent mesh dependency. We propose novel approaches to improve mesh-agnostic spatio-temporal prediction of transient flow fields using graph U-Nets, enabling accurate prediction on diverse mesh configurations. Key enhancements to the graph U-Net architecture, including the Gaussian mixture model convolutional operator and noise injection approaches, provide increased flexibility in modeling node dynamics: the former reduces prediction error by 95% compared to conventional convolutional operators, while the latter improves long-term prediction robustness, resulting in an error reduction of 86%. We also investigate the transductive- and inductive-learning perspectives of graph U-Nets with the proposed improvements. In the transductive setting, they effectively predict quantities for unseen nodes within the trained graph. In the inductive setting, they generalize successfully to mesh scenarios with different vortex-shedding periods, showing a 98% improvement in predicting future flow fields compared to a model trained without the inductive settings. We find that graph U-Nets without pooling operations, i.e., without reducing and restoring the node dimensionality of the graph data, perform better in inductive settings due to their ability to learn from the detailed structure of each graph. We also find that the choice of normalization technique significantly impacts graph U-Net performance.
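The two enhancements highlighted above, the Gaussian mixture model convolutional operator and noise injection during training, can be sketched as follows. This is an illustrative fragment using PyTorch Geometric's `GMMConv`, not the paper's architecture; layer sizes, the pseudo-coordinate choice, and the noise scale are assumptions.

```python
# Sketch: a GMMConv-based block (in place of a conventional graph convolution)
# and a training step with Gaussian noise injected into the input flow field,
# so the model learns to correct small errors that accumulate during rollout.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GMMConv

class GMMBlock(nn.Module):
    """One encoder/decoder block of a graph U-Net-style model built on GMMConv."""
    def __init__(self, in_ch, out_ch, pseudo_dim=2, kernel_size=3):
        super().__init__()
        # pseudo_dim: dimensionality of edge pseudo-coordinates (e.g., relative x, y)
        self.conv = GMMConv(in_ch, out_ch, dim=pseudo_dim, kernel_size=kernel_size)
        self.act = nn.ReLU()

    def forward(self, x, edge_index, edge_attr):
        return self.act(self.conv(x, edge_index, edge_attr))

def training_step(model, x, edge_index, edge_attr, target, noise_std=1e-2):
    """One noise-injection step: perturb the input node features before prediction."""
    noisy_x = x + noise_std * torch.randn_like(x)
    pred = model(noisy_x, edge_index, edge_attr)
    return F.mse_loss(pred, target)
```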
Abstract: In engineering design, surrogate models are widely employed to replace computationally expensive simulations by leveraging design variables and geometric parameters from computer-aided design (CAD) models. However, these models often lose critical information when simplified to lower dimensions and face challenges in parameter definition, especially for the complex 3D shapes commonly found in industrial datasets. To address these limitations, we propose a Bayesian graph neural network (GNN) framework for a 3D deep-learning-based surrogate model that predicts engineering performance by directly learning geometric features from CAD models using a mesh representation. Our framework determines the optimal size of mesh elements through Bayesian optimization, resulting in a high-accuracy surrogate model. Additionally, it effectively handles the irregular and complex structures of 3D CAD models, which differ significantly from the regular, uniform pixel structures of the 2D images typically used in deep learning. Experimental results demonstrate that mesh quality significantly impacts the prediction accuracy of the surrogate model, with an optimally sized mesh achieving superior performance. We compare the performance of models based on various 3D representations, such as voxels, point clouds, and graphs, and evaluate the computational costs of Monte Carlo simulation and Bayesian optimization for finding the optimal mesh size. We anticipate that the proposed framework can be applied to mesh-based simulations across various engineering fields, leveraging the physics-based information commonly used in computer-aided engineering.
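The mesh-size selection described above can be pictured as a Bayesian optimization loop over a single element-size parameter, with the validation error of the trained GNN surrogate as the objective. The sketch below uses scikit-optimize and a toy error curve in place of the real remesh-and-retrain pipeline; the search range and objective are assumptions for illustration only.

```python
# Sketch: Bayesian optimization over mesh element size, where each evaluation
# would remesh the CAD dataset, retrain the GNN surrogate, and return its
# validation error. A toy quadratic stands in for that expensive loop here.
from skopt import gp_minimize
from skopt.space import Real

def surrogate_validation_error(params):
    element_size = params[0]
    # Real pipeline (placeholder): remesh at `element_size`, train the GNN
    # surrogate on the meshes, and return the validation error.
    return (element_size - 3.0) ** 2  # illustrative stand-in with a minimum near 3 mm

result = gp_minimize(
    surrogate_validation_error,
    dimensions=[Real(1.0, 10.0, name="element_size_mm")],  # assumed search range
    n_calls=15,          # each real call remeshes and retrains, so keep it small
    random_state=0,
)
print("selected element size:", result.x[0])
```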
Abstract: Generative Design (GD) has evolved into a transformative design approach, employing advanced algorithms and AI to create diverse and innovative solutions beyond traditional constraints. Despite its success, GD faces significant challenges regarding the manufacturability of complex designs, often necessitating extensive manual modifications due to limitations in standard manufacturing processes and the reliance on additive manufacturing, which is not ideal for mass production. Our research introduces a framework that addresses these manufacturability concerns by integrating constraints pertinent to die casting and injection molding into GD through the use of 2D depth images. This method simplifies intricate 3D geometries into manufacturable profiles, removing infeasible features such as non-manufacturable overhangs and allowing essential manufacturing aspects such as thickness and rib design to be considered directly. Consequently, designs previously unsuitable for mass production are transformed into viable solutions. We further enhance this approach by adopting an advanced 2D generative model, which offers a more efficient alternative to traditional 3D shape generation methods. Our results substantiate the efficacy of this framework, demonstrating the production of innovative and, importantly, manufacturable designs. This shift toward integrating practical manufacturing considerations into GD represents a pivotal advancement, transitioning from purely inspirational concepts to actionable, production-ready solutions. Our findings underscore the usefulness and potential of GD for broader industry adoption, marking a significant step forward in aligning GD with the demands of manufacturing.
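One way to read the 2D depth-image idea is as a projection along the die-pull direction followed by re-extrusion, which by construction removes undercuts and overhangs in that direction. The sketch below is only an illustrative interpretation under assumed axis conventions and a voxelized input, not the paper's exact pipeline.

```python
# Sketch: project a voxelized design to a depth map along +Z (assumed pull
# direction), then re-extrude the map into a geometry with no overhangs along Z.
import numpy as np

def to_depth_image(voxels):
    """voxels: boolean array (X, Y, Z); returns per-column height of the top surface."""
    heights = np.where(
        voxels.any(axis=2),
        voxels.shape[2] - np.argmax(voxels[:, :, ::-1], axis=2),  # topmost filled voxel + 1
        0,
    )
    return heights.astype(np.int32)

def from_depth_image(depth, z_dim):
    """Re-extrude: fill each column from z=0 up to its depth value, which removes
    non-manufacturable overhangs and internal cavities along the pull direction."""
    z = np.arange(z_dim)[None, None, :]
    return z < depth[:, :, None]
```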
Abstract: Mechanisms are essential components designed to perform specific tasks in various mechanical systems. However, designing a mechanism that satisfies certain kinematic or quasi-static requirements is a challenging task. Kinematic requirements may include the workspace of a mechanism, while quasi-static requirements may include its torque transmission, which refers to the ability of the mechanism to transfer power and torque effectively. In this paper, we propose a deep learning-based generative model for generating multiple crank-rocker four-bar linkage mechanisms that satisfy both the aforementioned kinematic and quasi-static requirements. The proposed model is based on a conditional generative adversarial network (cGAN), modified for mechanism synthesis and trained to learn the relationship between the requirements of a mechanism and its linkage lengths. The results demonstrate that the proposed model successfully generates multiple distinct mechanisms that satisfy specific kinematic and quasi-static requirements. To evaluate the novelty of our approach, we compare the samples synthesized by the proposed cGAN with those from a traditional cVAE and NSGA-II. Our approach has several advantages over traditional design methods. It enables designers to efficiently generate multiple diverse and feasible design candidates while exploring a large design space. The proposed model also considers both kinematic and quasi-static requirements, which can lead to more efficient and effective mechanisms for real-world use, making it a promising tool for linkage mechanism design.
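The generator side of such a cGAN can be sketched as a small network that maps a latent noise vector plus a requirement (condition) vector to the four link lengths. The layer widths, condition encoding, and softplus output used below are assumptions for illustration, not the architecture reported in the paper.

```python
# Sketch: a conditional generator mapping (noise, requirements) -> link lengths
# for a crank-rocker four-bar mechanism.
import torch
import torch.nn as nn

class LinkageGenerator(nn.Module):
    def __init__(self, noise_dim=8, cond_dim=4, n_links=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + cond_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_links), nn.Softplus(),  # keep link lengths positive
        )

    def forward(self, z, condition):
        return self.net(torch.cat([z, condition], dim=-1))

# usage: sample several candidate mechanisms for one requirement vector
# z = torch.randn(16, 8); cond = requirement.expand(16, -1)
# link_lengths = LinkageGenerator()(z, cond)
```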
Abstract: The product design process in manufacturing involves iterative design modeling and analysis to achieve the target engineering performance, but such an iterative process is time-consuming and computationally expensive. Recently, deep learning-based engineering performance prediction models have been proposed to accelerate design optimization. However, they only guarantee predictions on training data and may be inaccurate when applied to data from new domains. In particular, 3D design data have complex features, meaning that domains with various distributions exist. Thus, the utilization of deep learning is limited by heavy data collection and training burdens. We propose a bi-weighted unsupervised domain adaptation approach that considers the geometry features and engineering performance of 3D design data and is specialized for deep learning-based engineering performance prediction. Domain-invariant features are extracted through an adversarial training strategy based on hypothesis discrepancy, and a multi-output regression task is performed with the extracted features to predict the engineering performance. In particular, we present a source instance weighting method suitable for 3D design data to avoid negative transfer. The developed bi-weighting strategy, based on the geometry features and engineering performance of engineering structures, is incorporated into the training process. The proposed model is tested on a wheel impact analysis problem to predict the magnitude and location of the maximum von Mises stress in 3D road wheels. This approach can reduce the target risk for unlabeled target domains on the basis of weighted multi-source domain knowledge and can efficiently replace conventional finite element analysis.
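A heavily simplified view of the bi-weighted objective is a per-instance weighted source regression loss combined with a hypothesis-discrepancy term on unlabeled target data. The weight definitions and the discrepancy form below are placeholders chosen for illustration, not the paper's exact formulation.

```python
# Sketch: weighted source regression loss + hypothesis discrepancy on target data.
# w_geometry and w_performance are assumed per-instance weights in [0, 1] derived
# from geometry-feature and engineering-performance similarity to the target domain.
import torch

def bi_weighted_loss(pred_src, y_src, w_geometry, w_performance,
                     pred_tgt_h1, pred_tgt_h2, lambda_disc=1.0):
    # product of the two weights downweights source instances likely to
    # cause negative transfer
    w = w_geometry * w_performance
    src_loss = (w * (pred_src - y_src).pow(2).mean(dim=-1)).mean()
    # hypothesis discrepancy: disagreement of two regressors on unlabeled target data,
    # minimized/maximized adversarially during training
    disc = (pred_tgt_h1 - pred_tgt_h2).abs().mean()
    return src_loss + lambda_disc * disc
```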
Abstract: Surrogate model-based optimization has been increasingly used in the field of engineering design. It involves creating a surrogate model of the objective functions or constraints based on data obtained from simulations or real-world experiments, and then finding the optimal solution from the model using numerical optimization methods. Recent advancements in deep learning-based inverse design methods have made it possible to generate real-time optimal solutions for engineering design problems, eliminating the need for iterative optimization. Nevertheless, no comprehensive study has yet closely examined the specific advantages and disadvantages of this novel approach compared to traditional design optimization. The objective of this paper is to compare the performance of traditional design optimization methods with that of deep learning-based inverse design methods on benchmark problems across various scenarios. Based on the findings of this study, we provide guidelines for the future utilization of deep learning-based inverse design. We anticipate that these guidelines will enhance the practical applicability of this approach to real engineering design problems.
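The two workflows being compared can be contrasted in a few lines: surrogate model-based optimization fits a model of the objective and then searches it iteratively, while deep learning-based inverse design maps a target performance directly to a design in one forward pass. Both the toy objective and the models below are illustrative stand-ins, not the benchmark setups used in the study.

```python
# Sketch: (a) surrogate model-based optimization on a toy objective;
#         (b) inverse design, shown conceptually, with no iterative search at inference.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(x):
    """Toy objective standing in for an expensive simulation."""
    return np.sum((x - 0.3) ** 2, axis=-1)

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(30, 2))
y_train = simulate(X_train)

# (a) fit a surrogate, then optimize over it numerically
surrogate = GaussianProcessRegressor().fit(X_train, y_train)
res = minimize(lambda x: surrogate.predict(x.reshape(1, -1))[0],
               x0=np.full(2, 0.5), bounds=[(0, 1), (0, 1)])
print("surrogate-based optimum:", res.x)

# (b) inverse design (conceptual): a trained generator g maps the target
# performance straight to a design, e.g. design = g(target_performance)
```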
Abstract: Neural network (NN) ensembles can reduce the large prediction variance of NNs and improve prediction accuracy. For highly nonlinear problems with insufficient data, the prediction accuracy of NN models becomes unstable, resulting in a decrease in ensemble accuracy. Therefore, this study proposes a frequency distribution-based ensemble that identifies core prediction values, which are expected to be concentrated near the true prediction value. The frequency distribution-based ensemble identifies core prediction values supported by multiple predictions by conducting a statistical analysis of the frequency distribution of the prediction values obtained at a given prediction point. This ensemble can improve predictive performance by excluding prediction values with low accuracy and coping with the uncertainty of the most frequent value. To improve the predictive performance of the frequency distribution-based ensemble efficiently, an adaptive sampling strategy is proposed that sequentially adds samples based on the core prediction variance, calculated as the variance of the core prediction values. Results of various case studies show that the prediction accuracy of the frequency distribution-based ensemble is higher than that of Kriging and other existing ensemble methods. In addition, the proposed adaptive sampling strategy improves the predictive performance of the frequency distribution-based ensemble more effectively than previously developed space-filling and prediction variance-based strategies.
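One way to picture the core-prediction idea is to bin the ensemble members' predictions at a point, keep the values in and around the most frequent bin, and use their mean as the ensemble output and their variance as the adaptive-sampling criterion. The number of bins and the one-bin neighborhood below are illustrative assumptions, not the paper's settings.

```python
# Sketch: identify "core" prediction values from a frequency distribution of
# ensemble member predictions at one prediction point.
import numpy as np

def core_prediction(member_preds, n_bins=10):
    """member_preds: 1D array of predictions from the NN ensemble at one point."""
    counts, edges = np.histogram(member_preds, bins=n_bins)
    mode_bin = np.argmax(counts)
    lo = edges[max(mode_bin - 1, 0)]       # include neighboring bins to cope with
    hi = edges[min(mode_bin + 2, n_bins)]  # the uncertainty of the most frequent value
    core = member_preds[(member_preds >= lo) & (member_preds <= hi)]
    return core.mean(), core.var()         # ensemble prediction, core prediction variance

# adaptive sampling (conceptual): evaluate core_prediction over candidate points and
# add the sample whose core prediction variance is largest
```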
Abstract: Topology optimization (TO) is a method of deriving an optimal design that satisfies given load and boundary conditions within a design domain. This method enables effective design without an initial design but has seen limited use due to high computational costs. At the same time, machine learning (ML) methodologies, including deep learning, have made great progress in the 21st century, and accordingly, many studies have applied ML to TO to enable effective and rapid optimization. Therefore, this study reviews and analyzes previous research on ML-based TO (MLTO). Studies are reviewed from two perspectives: (1) the TO perspective and (2) the ML perspective. The TO perspective addresses "why" ML is used for TO, while the ML perspective addresses "how" ML is applied to TO. In addition, the limitations of current MLTO research and future research directions are examined.
Abstract: For vehicle safety, the impact performance of a wheel must be ensured through a wheel impact test during wheel development. However, manufacturing and testing a real wheel take a significant amount of time and money because developing an optimal wheel design requires numerous iterations of modifying the design and verifying its safety performance. Accordingly, the physical wheel impact test has been replaced by computer simulations such as finite element analysis (FEA), but these still incur high computational costs for modeling and analysis, and FEA experts are required. This study presents a deep learning-based aluminum road wheel impact performance prediction model that replaces the computationally expensive and time-consuming 3D FEA. For this purpose, 2D disk-view wheel image data, 3D wheel voxel data, and the barrier mass value used in the wheel impact test are utilized as inputs to predict the magnitude of the maximum von Mises stress, its corresponding location, and the stress distribution of the 2D disk view. The prediction model can replace the impact test in the early wheel development stage by predicting the impact performance in real time, and it can be used without domain knowledge. The time required for the wheel development process can thereby be shortened.
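The multi-input, multi-output structure described above can be sketched as a 2D CNN branch for the disk-view image, a 3D CNN branch for the voxelized wheel, and a scalar barrier-mass input fused into shared features with three prediction heads. Channel counts, resolutions, and the fusion scheme are illustrative assumptions, not the paper's architecture.

```python
# Sketch: fuse 2D image, 3D voxel, and scalar mass inputs to predict the maximum
# von Mises stress, its location, and a coarse 2D stress-distribution map.
import torch
import torch.nn as nn

class WheelImpactNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_branch = nn.Sequential(          # 2D disk-view image branch
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.vox_branch = nn.Sequential(          # 3D wheel voxel branch
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4), nn.Flatten())
        fused = 16 * 16 + 8 * 64 + 1              # image + voxel features + barrier mass
        self.head_scalar = nn.Linear(fused, 1)    # magnitude of max von Mises stress
        self.head_loc = nn.Linear(fused, 2)       # (x, y) location on the disk view
        self.head_map = nn.Sequential(            # coarse 2D stress distribution
            nn.Linear(fused, 32 * 32), nn.Unflatten(1, (1, 32, 32)))

    def forward(self, img, vox, mass):
        z = torch.cat([self.img_branch(img), self.vox_branch(vox), mass], dim=1)
        return self.head_scalar(z), self.head_loc(z), self.head_map(z)
```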