Real Time Stress Prediction Using Machine Learning

27/10/2021 17:35 conference time (CEST, Berlin)

Room: M

J. Anthony, M. Rajamony, S. Leblanc, H. Bensoudane, A. Brener (Maya HTT, CAN)

Statistical modeling techniques are gaining popularity as surrogate models for mechanistic modeling such as finite element analysis (FEA). The surrogate aims to map inputs to outputs, whereas the FEA estimates the solution from the governing equations. Previous work has shown that machine learning algorithms can model highly non-linear data. Additionally, transfer learning has been applied in computer vision and natural language processing to reuse common knowledge learnt on a particular task for a different, but related, task. However, the application of such techniques to engineering simulations is only starting to emerge.
The objective of this paper is to demonstrate the relevance of transfer learning using simulation data in order to make accurate real-time life predictions, prepare overhaul schedules and detect anomalies. We developed a fully connected deep neural network for the regression task of fitting data from a finite element model. Subsequently, we applied transfer learning by utilizing scarce field data from the real model, which is similar in geometry but differs in physical and material properties. In-house tools were deployed to efficiently generate 27,000 source acceleration-stress profiles to train the network as a stress predictor based on acceleration. Furthermore, 6,000 target acceleration-stress profiles were computed to adapt the predictor to the target model. Popular Python-based machine learning tools were employed to explore a hyperparameter space. This analysis helped in recognizing key learning patterns, in understanding knowledge-transfer techniques involving freezing, retraining and adding layers to the base neural network, and finally in identifying the optimal neural network configuration for the target task. Excellent convergence of the loss functions was observed, giving us confidence in the training and transfer learning processes. This research showed that the tuned model is capable of transferring knowledge gained from fitting a large dataset to a different, smaller dataset, thus creating an accurate and useful digital twin from FEA.
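The authors' code is not published with the abstract. As a rough NumPy illustration of the freeze-and-retrain transfer step described above, the sketch below trains a small fully connected network on abundant synthetic "source" data, then retrains only its output layer on scarce, rescaled "target" data; the network size, learning rate and synthetic acceleration-stress response are all assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, W1, b1, W2, b2):
    h = relu(x @ W1 + b1)                      # hidden features
    return h @ W2 + b2, h

def train(x, y, W1, b1, W2, b2, lr=0.05, epochs=800, freeze_hidden=False):
    """One-hidden-layer MLP, full-batch gradient descent on MSE.
    With freeze_hidden=True only the output layer is updated (transfer step)."""
    for _ in range(epochs):
        pred, h = forward(x, W1, b1, W2, b2)
        err = (pred - y) / len(x)              # dMSE/dpred (up to a factor of 2)
        if not freeze_hidden:
            dh = (err @ W2.T) * (h > 0)        # backprop through ReLU
            W1 -= lr * (x.T @ dh)
            b1 -= lr * dh.sum(axis=0)
        W2 -= lr * (h.T @ err)
        b2 -= lr * err.sum(axis=0)

# Source task: abundant synthetic "FEA" data (stand-in for the 27,000 profiles)
xs = rng.uniform(-1.0, 1.0, (2000, 3))
f = lambda x: np.sin(x.sum(axis=1, keepdims=True)) + 0.5 * x[:, :1] ** 2
ys = f(xs)

W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
train(xs, ys, W1, b1, W2, b2)                  # base training

# Target task: scarce "field" data from a similar but not identical model
xt = rng.uniform(-1.0, 1.0, (100, 3))
yt = 1.2 * f(xt)

# Transfer step: freeze the hidden layer, retrain only the output layer
train(xt, yt, W1, b1, W2, b2, freeze_hidden=True)
mse = float(np.mean((forward(xt, W1, b1, W2, b2)[0] - yt) ** 2))
```

In a Keras-style workflow the freeze corresponds to setting `layer.trainable = False` on the reused layers before recompiling; the sketch makes the same distinction explicit by skipping the hidden-layer update.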

Deep Neural Networks, Digital Twin, Finite Element Analyses, Stress Prediction, Transfer Learning, Surrogate

AI Powered Product Design

27/10/2021 17:55 conference time (CEST, Berlin)

Room: M

F. Kocer (Altair Engineering, Inc., USA)

Engineering data science is a new but fast-growing field that leverages data science, including machine learning, to improve engineering processes and outcomes. It focuses on reducing repetitive, labor-intensive, non-value-added tasks in engineering processes as well as on improving product design, testing, manufacturing, and operations. Engineers are sufficiently skilled to be data scientists, but with their primary responsibility and focus on their engineering deliverables, they would rather use data science without having to become data scientists themselves. So it is up to software vendors to provide what they need in a well-integrated, robust fashion. In this presentation, three types of data science applications will be shown as implemented in the user's modelling and simulation environment. The first is the use of data science to improve the CAE model-building process using geometric machine learning within the finite element preprocessor. The second, and most sought after, is the use of data science to improve product design optimization using field predictions and expert emulation of subjective criteria in the design optimization process. Finally, applications of data science in test rigs for anomaly detection, in manufacturing for expert emulation, and in digital twins for operations will be covered.

Product Design, Machine Learning, CAE, AI, Optimization, Cloud

Application of Machine Learning and CFD to Model the Flow in an Internal Combustion Engine

27/10/2021 18:15 conference time (CEST, Berlin)

Room: M

J. Hodges (Siemens Digital Industries Software, USA); M. Emmanuelli, S. Sathyanandha (Monolith AI, GBR); J. Fernandes (Siemens Digital Industries Software, GBR)

As in many engineering industries, production timelines for internal combustion engines are too strict to allow full multi-disciplinary exploration of design permutations through large volumes of simulation and physical testing. This study combines machine learning and CFD simulation for accelerated, intelligent design of an internal combustion engine (ICE) to address this challenge.
The specimen investigated is a parameterized cylinder port design in a 4-stroke gasoline engine, for which a number of simulations are generated to partially cover the design space. The focus is an inlet port design that creates favorable developments in the turbulent flow field for more ideal combustion.
With such simulation data generated, neural networks are created to capture the relationship between the design parameters and the performance results (in 1D, 2D, and 3D). For the one-dimensional data, predictions are made for the transient evolution of important scalar performance metrics over an engine cycle, such as turbulent kinetic energy, tumble, and other thermodynamic variables. For the two-dimensional data, predictions are made for local values, similar to the one-dimensional data predictions, in the center plane of the cylinder. Since the design objective is focused on the turbulent flow-field, the three-dimensional data predictions focus on predicting the turbulent kinetic energy in the highly turbulent sections of the flow-field.
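As a minimal sketch of the one-dimensional prediction task described above — mapping a handful of design parameters to the transient evolution of a scalar such as turbulent kinetic energy over an engine cycle — the following uses a multi-output linear least-squares surrogate in place of the authors' neural network; the parameter count, crank-angle sampling, and synthetic response are invented for illustration, and only the input/output shapes are the point:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 port-design parameters, TKE sampled at 90 crank angles
n_designs, n_params, n_steps = 120, 4, 90
X = rng.uniform(0.0, 1.0, (n_designs, n_params))   # design parameters
theta = np.linspace(0.0, 2.0 * np.pi, n_steps)     # crank angle

# Synthetic "CFD" output: each design shifts/scales a base TKE curve
Y = (1.0 + X[:, :1]) * np.sin(theta)[None, :] ** 2 \
    + 0.2 * X[:, 1:2] * np.cos(theta)[None, :]

# Multi-output least-squares surrogate with a bias column; the paper's
# neural network plays this role, with one output per time step
A = np.hstack([X, np.ones((n_designs, 1))])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Predict the full transient TKE trajectory for an unseen design
x_new = np.array([[0.5, 0.2, 0.8, 0.3, 1.0]])      # parameters + bias term
tke_curve = x_new @ coef                           # shape (1, n_steps)
```

The 2D and 3D predictions in the study follow the same pattern with larger output dimensions (plane values and volumetric TKE respectively), which is where a neural network's capacity for non-linear, multi-variable behavior becomes essential.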
These predictions prove to be quite accurate and reveal that neural networks are effective at modeling simulation data for predictive design exploration. Their mathematical structure allows them to capture highly non-linear and multi-variable physical behavior. With such a simulation-machine learning approach, design exploration with a greater concentration on more attractive designs is possible.
These trained neural networks can also be used in design cycles of subsequent similar products, which could expedite early-stage design via transfer learning. To evaluate the transfer learning capabilities for this problem, the simulation data was split for training and validation in such a way that the two sets focused on different flow-field characteristics. Specifically, the training data comprised simulations with acute port designs, whose 'sharp' angles resulted in large flow separation upon entry to the cylinder. The validation dataset contained simulations with less steep intake port designs, which were therefore significantly different in terms of the resulting flow patterns.
Since these flow patterns greatly affect the resulting turbulence and therefore the combustion behavior, it is encouraging that the neural networks were able to accurately predict the ICE simulation results; this provides further confidence that they can add value throughout the design process.
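The characteristic-based split described above — training on one family of flow fields and validating on a distinctly different one — can be sketched as follows, assuming a hypothetical port-angle parameter and an illustrative 45-degree threshold (both invented, not the study's actual parameterization):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameterization: intake-port angle plus three other parameters
port_angle = rng.uniform(20.0, 70.0, 200)        # degrees
other = rng.uniform(0.0, 1.0, (200, 3))
X = np.column_stack([port_angle, other])

# Split by flow-field characteristic rather than at random: acute ('sharp')
# designs train the network, shallower designs test its extrapolation
train_idx = np.flatnonzero(port_angle < 45.0)
valid_idx = np.flatnonzero(port_angle >= 45.0)

X_train, X_valid = X[train_idx], X[valid_idx]
```

Unlike a random split, this deliberately places the validation set outside the training distribution, so good validation accuracy indicates genuine transferability rather than interpolation.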

Machine learning, AI, Neural Networks, CFD, ICE, Simulation, Automotive

Machine Learning for Stress Hot Spot Recognition

27/10/2021 18:35 conference time (CEST, Berlin)

Room: M

F. Cordisco, F. Dri (Dassault Systemes, USA)

Undesired stress hot spots caused by incorrect modeling practices often result in failed analyses, longer runtimes or inaccurate results. Technologies such as error indicators, discontinuous stress plots or stress hot spot detection can identify stress concentrations but cannot classify them by type or by their impact on the analysis, hence requiring expensive mesh convergence analyses or expert review that slow down the production process.
In this article we propose a novel technique in which machine learning, paired with image recognition, is used to identify and classify stress concentrations by analyzing the fine subtleties that exist between different stress field results. In particular, we focus on the problem of stress hot spots caused by contact point loads, as distinguished from smoothly distributed stress fields.
Using a deep convolutional neural network (CNN) in a proof-of-concept scenario, we evaluate the technology on different stress concentration patterns with varying mesh densities and find a high level of accuracy (97.5%) in identifying stress concentrations from contact point loads over 5-fold cross-validation testing. The CNN is able to differentiate stress localization patterns generated by sharp corners or boundary localization from those associated with contact point loads.
By using open-source neural network libraries (e.g. Keras with TensorFlow) we also find that properly trained neural networks replace the complex, usually problem-dependent rules needed for automatic post-processing and provide a unified framework that is easy to maintain and expand.
Finally, we discuss how the method can be applied as an assistant technology by experienced or inexperienced engineers for the early detection of underlying issues with FEA in simple or assembled components. We also show how some of the limitations imposed by the use of image recognition as feature mapping can be overcome with a distortion grid mapping technique capable of analyzing stress in the bulk of the solid, as well as handling highly distorted geometries on complex parts.
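The authors' CNN is not published with the abstract. As a toy stand-in for the pipeline described above, the following NumPy sketch extracts max-pooled responses from a fixed random convolutional filter bank and evaluates a logistic classifier with 5-fold cross-validation on synthetic hot-spot vs. smooth stress fields; the field size, kernel count, and synthetic data are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_field(hot_spot, size=16):
    """Synthetic stress field: a smooth gradient plus noise, optionally with a
    sharp localized peak mimicking a contact point load."""
    y, x = np.mgrid[0:size, 0:size]
    field = 0.3 * x / size + 0.1 * rng.normal(size=(size, size))
    if hot_spot:
        cy, cx = rng.integers(2, size - 2, 2)
        field += 3.0 * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 2.0)
    return field

def conv_features(img, kernels):
    """Valid 2-D convolution with each kernel, ReLU, then global max-pooling."""
    k = kernels.shape[-1]
    feats = []
    for K in kernels:
        resp = np.array([[np.sum(img[i:i + k, j:j + k] * K)
                          for j in range(img.shape[1] - k + 1)]
                         for i in range(img.shape[0] - k + 1)])
        feats.append(np.maximum(resp, 0.0).max())
    return np.array(feats)

kernels = rng.normal(0.0, 1.0, (8, 3, 3))        # fixed random filter bank
X = np.array([conv_features(make_field(i % 2 == 1), kernels) for i in range(100)])
y = np.array([i % 2 for i in range(100)])        # 1 = contact hot spot

# 5-fold cross-validation of a logistic classifier on the pooled features
folds = np.array_split(rng.permutation(100), 5)
accs = []
for fold in folds:
    tr = np.setdiff1d(np.arange(100), fold)
    mu, sd = X[tr].mean(0), X[tr].std(0) + 1e-9  # standardize on training fold
    Xtr, Xte = (X[tr] - mu) / sd, (X[fold] - mu) / sd
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(300):                         # gradient descent on log-loss
        p = 1.0 / (1.0 + np.exp(-np.clip(Xtr @ w + b, -30, 30)))
        g = p - y[tr]
        w -= 0.1 * Xtr.T @ g / len(tr)
        b -= 0.1 * g.mean()
    p_te = 1.0 / (1.0 + np.exp(-np.clip(Xte @ w + b, -30, 30)))
    accs.append(((p_te > 0.5).astype(int) == y[fold]).mean())
mean_acc = float(np.mean(accs))
```

A trained CNN learns its filters end-to-end rather than fixing them at random, but the overall structure — convolution, non-linearity, pooling, classification, and per-fold accuracy averaging — is the same evaluation scheme the abstract reports its 97.5% figure under.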

stress hot spot, localization, contact localization, error indicator, stress gradient, neural networks, deep learning, CNN, stress concentration, sparse stress distribution