M12
Engineering Data Science 2

Analysis of Numerical Crash Simulation Data Using Dimensionality Reduction and Machine Learning
27/10/2021 13:20 conference time (CEST, Berlin)
Room: M
N. Ballal, M. Dlugosch (Fraunhofer EMI, DEU)
With the increasing virtualization of automotive R&D processes, analyzing the growing amounts of numerical simulation data produced is becoming more and more challenging. A considerable amount of resources is spent to extract underlying knowledge about the crash behavior under certain input parameters. This research proposes data science methods to semi-automatically analyze numerical simulation data. The goals of this research include comparing different dimensionality reduction algorithms to represent simulation data as lower-dimensional embeddings, comparing clustering algorithms to group simulations displaying similar crash behavioral patterns, finding causes for a certain behavioral pattern, and discovering design rules to avoid undesired behavioral patterns. The lightweight lower-dimensional embedding of the simulations is produced using feature-extraction dimensionality reduction methods such as Principal Component Analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE). The simulations' underlying knowledge is extracted from the lower-dimensional embedding, and similar simulations are clustered based on their behavioral patterns using unsupervised clustering algorithms such as k-means and Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH). These behavioral patterns can then be analyzed, and the importance of the input parameters causing a certain behavioral pattern is obtained using random forests. Finally, the design rules to avoid an undesired behavioral pattern are extracted using decision tree algorithms and association rule mining. The workflow is applied to a simple side pole impact simulation model, in which a generic vehicle floor structure is impacted by a pole. To create a dataset, extensive simulations are carried out by varying the values of the input parameters using a Latin-Hypercube DoE scheme. The best results were obtained using a combination of a linear dimensionality reduction algorithm and a hybrid clustering algorithm, which yielded three different behavioral patterns while retaining 90% of the original information in 50 dimensions. The results obtained from rule extraction confirm the rules anticipated by domain experts to avoid an undesired cluster. These design rules can assist engineers in efficiently defining successful future designs. (An illustrative code sketch of this pipeline follows the keywords below.)
Automotive, Crash Simulation, Data Analysis, Machine Learning, Dimensionality Reduction
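
The pipeline described in the abstract above maps naturally onto standard scikit-learn building blocks. The following is a minimal sketch under stated assumptions, not the authors' implementation: the random placeholder data, the array shapes, the choice of k-means over BIRCH, and names such as "doe_params" are illustrative only.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
displacements = rng.normal(size=(200, 5000))  # 200 runs x flattened outputs (placeholder)
doe_params = rng.uniform(size=(200, 8))       # Latin-Hypercube input matrix (placeholder)

# 1) Lower-dimensional embedding; the abstract reports ~90% of the
#    original information retained in 50 dimensions.
pca = PCA(n_components=50)
embedding = pca.fit_transform(displacements)

# 2) Group simulations showing similar behavioural patterns.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

# 3) Rank the input parameters driving cluster membership.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(doe_params, labels)
print("parameter importances:", forest.feature_importances_.round(3))

# 4) Extract human-readable design rules separating an undesired cluster.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(doe_params, labels == 2)
print(export_text(tree, feature_names=[f"p{i}" for i in range(8)]))

A shallow decision tree is used here because its split thresholds read directly as design rules (e.g. "p3 <= 0.42"); association rule mining, also mentioned in the abstract, would require a separate library and is omitted.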
Anomaly Detection with Generative Adversarial Networks (GANs) Leveraging Failure Simulation Data
27/10/2021 13:40 conference time (CEST, Berlin)
Room: M
R. Duquette (Maya HTT, CAN)
The advent of attention-based neural network architectures and, more generally, generative adversarial networks (GANs) has opened new avenues for training neural networks from low volumes of data in a truly efficient way to gain physics-based predictive capabilities. One of the key practical obstacles to leveraging engineering simulation results for training neural networks is the requirement to run a significant number of FEA or CFD solver simulations to reach a useful level of AI-based predictive capability. The ability to train neural networks does exist, but the volume of FEA or CFD results needed limits us to very few practical engineering problems, or to problems for which a system-level physics model is accurate enough. The GAN framework, introduced by Ian J. Goodfellow in 2014 for estimating generative models via an adversarial process, can be leveraged to significantly reduce the number of FEA or CFD solver simulations while still yielding very accurate results. In this paper, we show some practical applications where the adversarial framework is used to create reduced-order models and virtual sensors from complex engineering simulations in a practical and manageable timeframe and with more reasonable compute resources. Moreover, the resulting AI agents (deep neural networks) can serve as an engineering basis for detecting anomalies in real-world applications by contrasting engineering simulation model results with real-time telemetry. The unique combination of GAN-based reduced-order models or virtual sensors learning from engineering solvers can also recognize the onset of specific failure modes and allow time for operations to change course and avoid failures. We will outline how the generative model captures the engineering simulation data distribution, and how the discriminative model estimates the probability that a new sample came from the training data, providing a strong foundation for leveraging failure mode simulations in real-world applications. (A minimal sketch of the adversarial setup follows the keywords below.)
AI, reduced order models, virtual sensors, failure simulation
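
The adversarial setup the abstract refers to can be illustrated with a minimal generator/discriminator pair. This is a schematic sketch in PyTorch under placeholder assumptions (network sizes, random stand-in "simulation" samples), not Maya HTT's model; the discriminator's output doubles as an anomaly score for telemetry.

import torch
import torch.nn as nn

latent_dim, sample_dim = 16, 128  # assumed sizes of noise vector and flattened result

# Generator learns the simulation-data distribution; discriminator estimates
# the probability that a sample came from the training data.
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, sample_dim))
D = nn.Sequential(nn.Linear(sample_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real = torch.randn(32, sample_dim)  # stand-in for FEA/CFD result snapshots

for step in range(500):
    # Discriminator step: real simulation samples -> 1, generated samples -> 0.
    fake = G(torch.randn(32, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make generated samples score as real.
    loss_g = bce(D(G(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Anomaly detection: telemetry scoring far from the learned simulation
# distribution is flagged, e.g. as the onset of a simulated failure mode.
telemetry = torch.randn(1, sample_dim)
print("p(real):", torch.sigmoid(D(telemetry)).item())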
Combining Reduced-order Modeling and Machine Learning for Local-global Simulation: Short Circuit Prediction in Electric Vehicle Crash Simulation
27/10/2021 14:00 conference time (CEST, Berlin)
Room: M
F. Daim, A. Dumon (ESI Group, FRA); N. Hascoët (ENSAM, FRA); M. Andres (Volkswagen, DEU); C. Breitfuss (Virtual Vehicle Research, AUT); E. Cortelletti (C.R.F. S.C.p.A, ITA); C. Jimenez (Applus+ IDIADA Group, ESP); F. Chinesta (ESI/ENSAM, FRA)
Today, simulation is used by automotive engineering teams to design lightweight vehicle bodies that fulfill vehicle safety regulations and perform well in consumer testing. Legislation is rapidly evolving for the growing electric vehicle (EV) market, not least to account for the additional battery safety requirements. One of the most important safety considerations is the need to avoid internal short circuits (SCs) in the battery cell due to internal damage that can occur during a crash scenario. Such SCs represent a significant fire hazard. Assessing SC risk in crash simulation at the vehicle level is a complex task, as it involves phenomena at different scales. During the impact, the vehicle deforms at the macro-scale, whereas the battery cells deform locally, damaging the extremely thin separator foil, which ultimately leads to an SC. To evaluate the SC risk accurately, it is therefore necessary to consider the behaviour on the (local) meso-scale. Integrating the meso-scale cell description into crash simulations is infeasible, because the model size leads to a drastically reduced stable timestep imposed by the explicit time discretisation. To account for the localised behaviour, a novel combination of reduced-order modeling (ROM) techniques and machine learning (ML) methods is used to bridge the diverse length scales in such simulations. Realistic boundary conditions on the cell level are obtained from standard EV crash simulations and cascaded down to the cells. A first application of ROM using the sparse proper generalised decomposition yields an enriched set of boundary conditions that can be applied to the cell model. Another ROM technique, incremental dynamic mode decomposition, which is well-adapted to treating large numbers of parameters, is applied to compute an equivalent stiffness of a representative macro-cell. Finally, the SC risk is evaluated using ML, which links the SC of the meso-cell to the macro-cell mechanical behaviour. (A schematic sketch of the reduced-model-plus-ML idea follows the keywords below.)
Crash, Electric Vehicle, Reduced Order Modeling, Machine Learning, Dynamic
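
As a rough illustration of the local-global idea, the sketch below pairs a plain (non-incremental) dynamic mode decomposition with a regressor linking a reduced macro-cell response to a meso-scale SC risk label. All data are random placeholders, and neither the sparse proper generalised decomposition nor the incremental DMD used by the authors is reproduced here.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(300, 101))  # state dimension x time steps (placeholder)

# Exact DMD: fit a reduced linear operator advancing one snapshot to the next.
X, Y = snapshots[:, :-1], snapshots[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 10                                        # truncation rank (assumed)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].T
A_tilde = Ur.T @ Y @ Vr @ np.linalg.inv(Sr)   # r x r reduced operator
eigvals, modes = np.linalg.eig(A_tilde)       # DMD eigenvalues / mode dynamics

# ML link: predict meso-scale short-circuit risk from reduced macro features
# (e.g. modal amplitudes per cell); the labels here are placeholders.
macro_features = rng.normal(size=(500, r))
sc_risk = rng.uniform(size=500)
model = GradientBoostingRegressor().fit(macro_features, sc_risk)
print(model.predict(macro_features[:3]))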
Deep Convolutional Neural Networks for Lid Driven Cavity Flows
27/10/2021 14:20 conference time (CEST, Berlin)
Room: M
B. Karnam (Tata Consultancy Services, IND)
Simulations play a key role in product development, and their usage has increased exponentially in the past decade. This has created an ever-increasing need to reduce simulation execution time so that better product designs can be achieved. Traditionally, higher CPU core counts have been used for this purpose, but current approaches do not scale linearly, and high-fidelity simulations often run for several days or weeks. On the other hand, deep neural networks have proven to be good approximators for predicting solutions for given inputs in a number of domains. Deep convolutional networks, a class of deep neural networks, have proven to be a viable approach for learning image representations. Recently, deep convolutional networks have been applied to predicting fluid flows with good success [Guo et al. 2016, Hennigh 2017, Ribeiro et al. 2020]. The present work extends deep convolutional networks to 2D lid-driven cavity flows with an additional object inside the cavity. A square cavity of side 0.2 m is considered. Random shapes of triangular and rectangular objects are placed at the centre of the cavity, leading to 600 different configurations. The top boundary moves at a uniform speed of 50 m/s, corresponding to a Reynolds number of 1000. Gmsh, an open-source meshing software, is used to create triangular meshes, and the OpenFOAM solver icoFoam is used to evaluate the incompressible laminar fluid flow in the cavity. The data from these 600 simulations are extracted in the form of a shape distance function to represent the input domain. The resulting velocity components in the x and y directions are used as the target values. A U-net architecture with skip connections is used to approximate the fluid flow. The U-net architecture contains an encoder and a decoder part with skip connections from encoder to decoder; it was first proposed for medical image segmentation and has since been applied in various other fields. In the current approach, the encoder contains 3×2 convolution layers followed by max pooling layers. The decoder part contains transpose convolution layers with appropriate zero padding to match the input dimensions. The network was trained with a learning rate of 1e-4. The input data from the 600 simulations were split into three parts for training, testing, and validation purposes. The results comparing the streamlines for different shapes from both OpenFOAM and the U-net architecture are highlighted, and opportunities for additional improvements are discussed. (A minimal sketch of such a U-net follows the keywords below.)
Deep Convolutional Neural Networks, Lid-Driven Cavity Flows, U-net for fluid flow, deep learning for fluid flow
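
The encoder-decoder-with-skip-connections idea is easy to make concrete. Below is a minimal U-net sketch in PyTorch in which the single input channel is the shape distance function on a grid and the two output channels are the x- and y-velocity fields; the layer counts and the 64x64 grid are assumptions for illustration, not the author's exact architecture.

import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 convolutions with ReLU, the basic U-net building block.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class UNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottom = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)          # 128 = 64 upsampled + 64 from skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.head = nn.Conv2d(32, 2, 1)     # two output channels: u_x, u_y

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

sdf = torch.randn(4, 1, 64, 64)   # batch of shape-distance-function inputs
velocity = UNet()(sdf)            # -> (4, 2, 64, 64): predicted u_x, u_y fields

Training would minimise a pixelwise loss (e.g. MSE against the icoFoam velocity fields) with Adam at the abstract's stated learning rate of 1e-4.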