F17
System Level Simulation 3


Advances in Collaborative Multidisciplinary Simulation for Aircraft Preliminary Design
28/10/2021 13:05 conference time (CEST, Berlin)
Room: F
E. Moerland (DLR - Deutsches Zentrum für Luft- und Raumfahrt, DEU)
To create potential solutions for the increasing demands placed on future air vehicle configurations, a systematic and parallel consideration of a large number of disciplines is required. To foster collaboration between heterogeneous knowledge bearers, the German Aerospace Center (DLR) has continuously increased its effort in the digitalization and automation of engineering knowledge over the last decade. In pursuit of the goal of developing enhanced virtual aircraft configurations, the underlying design and integration methods have been matured and extensively tested, today resulting in loosely coupled multidisciplinary simulation workflows encompassing a steadily increasing number of tools. Within the dedicated, decentralized network of competences, these tools stem both from within DLR and from partners across company borders. Although the approach has been applied successfully to combine the required knowledge in a series of projects, both within DLR and in EU-wide consortia, further opportunities for improvement have been identified.

This paper provides a general overview of the developments and encountered pitfalls concerning collaborative multidisciplinary simulation for aircraft preliminary design thus far. Based on the experience gained, opportunities for increasing the transparency and flexibility of the design process and for reducing its turnaround time are identified and discussed. Although a link to the current development of model-based systems engineering (MBSE) techniques for streamlining the orchestration of the design process is shown, this paper focuses on the development of simulation processes for multidisciplinary design and optimization (MDO).

To interconnect the established software tools of the heterogeneous specialists within the design process, these tools are wrapped around a central data exchange format and made batch-executable. The DLR-established data format “Common Parametric Aircraft Configuration Schema” (CPACS) provides the common language for standardized parameter exchange and allows for a considerable reduction in the number of modeled interfaces between the tools. The automated tools are provided as engineering services, hosted on dedicated servers at the respective organizations, and are integrated into executable simulation workflows within the Remote Component Environment (RCE), a process integration software. Setting up such workflows was initially done using manually executed methods from the field of systems engineering, for example the manual creation of design structure matrices to identify dependencies between engineering services. Using the latest developments, workflow creation is automated with the MDAO Workflow Design Accelerator (MDAx), which applies techniques for efficiently routing parameters through the engineering services. Additionally, the network of competences has been considerably extended by enabling the automated, cross-company exchange of information within the aircraft design community. Multi-tier design processes, in which knowledge models are shared as black boxes or in the form of response-surface models, are being established to create and analyze promising technologies to be embedded in future aircraft system architectures. The simulation of future hybrid-electric configurations using well-organized simulation workflow orchestration techniques is currently at the heart of the design studies performed. What will future design methodologies look like?
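To make the wrapping of a tool around a central exchange format concrete, the following is a minimal sketch of a batch-executable engineering service in Python. The element paths (wing/span, wing/area, wing/aspectRatio) and file names are illustrative assumptions, not the actual CPACS schema or any DLR tool.

    # Minimal sketch of a batch-executable "engineering service" wrapped
    # around a central XML exchange file (CPACS-like). The element paths
    # used here are illustrative assumptions, not the real CPACS schema.
    import sys
    import xml.etree.ElementTree as ET

    def run(cpacs_in: str, cpacs_out: str) -> None:
        tree = ET.parse(cpacs_in)
        root = tree.getroot()

        # Read the inputs this service needs from the shared data set.
        span = float(root.findtext("./wing/span"))   # [m], assumed path
        area = float(root.findtext("./wing/area"))   # [m^2], assumed path

        # The discipline-specific analysis (here: a trivial placeholder).
        aspect_ratio = span ** 2 / area

        # Write the result back so downstream services can consume it.
        result = root.find("./wing/aspectRatio")
        if result is None:
            result = ET.SubElement(root.find("./wing"), "aspectRatio")
        result.text = str(aspect_ratio)
        tree.write(cpacs_out)

    if __name__ == "__main__":
        # Batch execution, e.g. from a workflow node:
        #   python wrapper.py cpacs_in.xml cpacs_out.xml
        run(sys.argv[1], sys.argv[2])

Because every service reads from and writes to the same shared schema, adding a tool requires one interface to the exchange format rather than one interface per partner tool, which is where the reduction in modeled interfaces comes from.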
Increased effort is being put into developing methods for creating novel knowledge-based engineering services that can be flexibly adjusted to the needs of the design team. To achieve this, a framework is constructed which combines ontology-, inference- and rule-based modeling techniques. In addition to improving the way engineering services are created, an increasing amount of result data is being generated, and the availability of real-life physics-based data through the adoption of digital twins is close to realization. Research effort is therefore invested in finding ways to cope with this growing amount of data. Can a proper combination of data-driven analysis and physics-based models provide a solid basis for improving the decision-making process?
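As a toy illustration of the rule-based part of such a framework, the sketch below applies forward-chaining rules to a small fact base until a fixed point is reached; the facts and rules are invented for the example and do not reflect the actual DLR framework.

    # Toy forward-chaining rule engine, sketching the rule-based part of a
    # knowledge-based engineering framework. Facts and rules are invented
    # for illustration and do not reflect the actual DLR framework.
    facts = {("aircraft", "range_km", 12000)}

    def long_range_rule(facts):
        # If the design range exceeds 8000 km, classify as long-range.
        for subj, pred, obj in set(facts):
            if pred == "range_km" and obj > 8000:
                facts.add((subj, "class", "long-range"))

    def engine_rule(facts):
        # Long-range aircraft get a high-bypass engine requirement.
        for subj, pred, obj in set(facts):
            if pred == "class" and obj == "long-range":
                facts.add((subj, "engine", "high-bypass-turbofan"))

    rules = [long_range_rule, engine_rule]

    # Apply all rules repeatedly until no new facts are inferred.
    while True:
        before = len(facts)
        for rule in rules:
            rule(facts)
        if len(facts) == before:
            break

    print(sorted(facts))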
Collaborative engineering, Multidisciplinary simulation, Aircraft preliminary design methods, Knowledge-Based Engineering
Predictive Maintenance for Wind Farms: Combining ML and Physics-based Modelling to Reduce Wind Turbine Asset Downtime
28/10/2021 13:25 conference time (CEST, Berlin)
Room: F
M. Cameron (ESI Group, FRA); A. Alsaab, R. Said (ESI, GBR)
Wind energy is one of the leading (and fastest-growing) sources of renewable energy and represents a vital part of the goal of realising a world powered by green energy. Despite their long history, the profitability of wind turbines is still highly dependent on their operational expenditure, notably the cost of asset maintenance. The asset downtime resulting from the failure of key turbine components, such as generators and gearboxes, means that a single fault can cause a significant reduction in energy production. Traditionally, OEM-installed SCADA systems in wind turbines provide only limited information about the current state of the various subsystems. The low frequency (typically one sample every 10 minutes) and averaged nature of the data acquisition is a prohibitive factor in the construction of intelligent models. The difficulty, and risk, of retrofitting legacy wind turbine assets with new, high-frequency condition monitoring systems means that such solutions are generally unattractive for wind farm owner/operators. These issues motivate the development of smart predictive maintenance tools that offer more than standard SCADA systems.

As in many other sectors, data analytics approaches such as machine learning have become an essential part of the wind farm O&M provider’s toolset, as they yield models able to inform decisions on the operational strategy of instrumented products “in time”. Nevertheless, a purely data-driven approach has limitations. Foremost is the fact that such models are trained on historical data and therefore cannot accurately identify anomalous events they have not encountered before. Attempting “what-if” scenarios, in which hypothetical situations are tested, also becomes difficult in such a context. Physics-based models, in contrast, represent faulty behaviour starting from first principles and can thus capture any fault that the physics can describe. This justifies the introduction of virtual, physics-based models in predictive-maintenance applications. System simulation offers a flexible framework for realising model-based diagnostic tools, as it allows multi-domain physical systems (mechanical, thermal, hydraulic, electrical, etc.) to be represented in a standardised way; fault modelling for these different domains can likewise be realised systematically. In this work, we demonstrate how a holistic approach combining data-driven and physics-based models can be deployed to realise an application that provides in-time estimates of wind turbine asset health and operational performance.
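As an illustration of the data-driven half of such a holistic approach, the sketch below trains a normal-behaviour model on synthetic healthy SCADA-style data and flags samples whose residual exceeds a threshold. The feature set, the synthetic data, and the 4-sigma threshold are assumptions for illustration only; the physics-based fault models discussed above are not shown.

    # Sketch of a residual-based health indicator: a data-driven
    # normal-behaviour model is trained on healthy 10-minute SCADA-style
    # data, and deviations from its predictions flag potential faults.
    # Feature names, synthetic data, and the threshold are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in for healthy SCADA history:
    # inputs = [wind speed m/s, ambient temp degC, power kW]
    X_healthy = np.column_stack([
        rng.uniform(3, 20, 5000),
        rng.uniform(-5, 30, 5000),
        rng.uniform(50, 2000, 5000),
    ])
    # Target: gearbox bearing temperature, here a synthetic function.
    y_healthy = (40 + 0.01 * X_healthy[:, 2] + 0.3 * X_healthy[:, 1]
                 + rng.normal(0, 1.0, 5000))

    # Fit on one part of the healthy data, set the alarm threshold from
    # the residual spread on held-out healthy data.
    X_tr, X_val, y_tr, y_val = train_test_split(
        X_healthy, y_healthy, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)
    residuals = y_val - model.predict(X_val)
    threshold = 4 * residuals.std()

    def health_check(x_new, temp_measured):
        """Flag a sample whose measured temperature deviates from the
        normal-behaviour prediction by more than the threshold."""
        temp_expected = model.predict(np.atleast_2d(x_new))[0]
        return abs(temp_measured - temp_expected) > threshold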
Digital Twin, System Simulation, Data Analytics, Wind Energy
Cluster-based Metamodeling
28/10/2021 13:45 conference time (CEST, Berlin)
Room: F
D. Steffes-lai, R. Iza-Teran, T.N. Klein (Fraunhofer SCAI, DEU); J. Garcke (Fraunhofer SCAI and Universität Bonn, DEU)
Modern product development in engineering is highly driven by computational approaches using finite element simulations and data analysis. Nevertheless, simulations in automotive applications are still very time- and resource-consuming. This creates a strong need for accurate surrogate models that are cheap to evaluate, can be analysed easily with machine learning, and can be used efficiently, for example, in optimization studies. However, changes in the input parameters can cause bifurcations in the simulation results, especially in highly nonlinear applications such as crash simulation, and these bifurcations often cannot be represented adequately by a single surrogate model. To improve the prediction quality of the surrogate model in such cases, a preceding clustering of the simulations is very promising.

We propose a workflow in which the simulations obtained for different input parameter combinations are first clustered according to the similarity of their results, and these clusters are then used selectively to obtain separate surrogate models. The approach identifies global similarities using dimensionality reduction, after which the simulations forming a cluster are identified automatically. Since simulations inside a similarity-based cluster vary much more uniformly than simulations across clusters, surrogate models constructed per cluster are considerably more accurate.

In our study, design of experiments is employed to generate simulation snapshots. These are used to construct two types of surrogate models: a standard proper orthogonal decomposition (POD), which serves as the baseline for comparison, and our cluster-based approach, with the same set of snapshots used for both. In particular, we employ a metamodeling approach using a kernel method, namely radial basis functions (RBF), to improve the prediction quality especially for highly nonlinear crash simulation results. Both surrogate approaches are applied first without the preceding clustering, which results in rather poor prediction accuracy for specific parameter combinations, and then with the preceding clustering, which leads to large improvements in the prediction results. The approach is demonstrated for several CAE applications in metal forming and crash simulation, and the constructed surrogate models are evaluated using different types of metrics. The results clearly demonstrate the advantages of the proposed methodology, especially in the presence of bifurcations.
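A minimal sketch of the cluster-then-surrogate idea follows. It uses PCA and k-means as generic stand-ins for the dimensionality reduction and clustering steps described above, SciPy’s RBF interpolator for the per-cluster surrogates, and a synthetic bifurcating response in place of real crash results.

    # Sketch of the cluster-then-surrogate workflow: reduce simulation
    # results, cluster them, and fit one RBF surrogate per cluster. The
    # synthetic bifurcating response and the choice of PCA/KMeans are
    # illustrative stand-ins for the methods named in the abstract.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(1)

    # Design of experiments: snapshots over a 2D input parameter space.
    params = rng.uniform(-1, 1, size=(200, 2))

    # Synthetic "simulation results" with a bifurcation at params[:, 0] = 0:
    # two branches that a single global surrogate would smear together.
    branch = np.sign(params[:, 0])
    results = np.column_stack([branch * (1 + params[:, 1]),
                               branch * params[:, 0] ** 2])

    # 1) Dimensionality reduction of the result vectors.
    embedding = PCA(n_components=2).fit_transform(results)

    # 2) Cluster simulations by similarity of their (reduced) results.
    labels = KMeans(n_clusters=2, n_init=10,
                    random_state=0).fit_predict(embedding)

    # 3) One RBF surrogate per cluster, mapping parameters to results.
    surrogates = {
        c: RBFInterpolator(params[labels == c], results[labels == c])
        for c in np.unique(labels)
    }

    def predict(p):
        # Pick the cluster of the nearest training point, then evaluate
        # that cluster's surrogate (a simple cluster-assignment choice).
        nearest = np.argmin(np.linalg.norm(params - p, axis=1))
        return surrogates[labels[nearest]](np.atleast_2d(p))[0]

    print(predict(np.array([0.5, 0.2])))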
surrogate models, clustering, machine learning, prediction, crash simulation
A Massively Parallel Fast Multipole Library for Industrial Applications
28/10/2021 14:05 conference time (CEST, Berlin)
Room: F
J. Zechner, L. Kielhorn, T. Rüberg (Tailsit GmbH, AUT)
In many engineering disciplines the use of Boundary Element Methods (BEM, or Method of Moments) offers an advantage because a volume mesh is avoided for at least some parts of the domain. For example, in problems of acoustics, electromagnetics or scattering, discretising the air region can be very cumbersome. Moreover, solid parts may move with respect to each other, so that tedious remeshing is required. This led us in the past to combine the BEM with Finite Elements (FEM) to exploit the respective advantages of each method. The FEM-BEM coupling approach has been applied successfully to problems from electromagnetics; the analysis of actuators, such as magnetic valves, or the snapping of permanent magnets are examples where avoiding a volume mesh for the air region is particularly advantageous.

Unfortunately, the standard BEM has an inherently quadratic complexity similar to that of N-body problems: each degree of freedom interacts directly with every other. Its application at today's industrial scale is therefore only feasible with acceleration methods that reduce the numerical effort. To this end, we have developed the software library PARTS, which is based on the Fast Multipole Method (FMM). The FMM rests on a hierarchical decomposition of the computational domain and an approximation scheme for the distant interactions between clusters of degrees of freedom. A careful implementation of this method enables us to tackle BEM-based systems with linear complexity. Crucial design features of our FMM library are reusability, applicability to a wide range of problems, multi-threading, and distributed memory parallelism (MPI) for large-scale applications.

In this work, we present the latest results from our FMM library. For specific use cases, the scaling (speedup) in an HPC environment with up to ~1000 cores is shown. Several practical examples in conjunction with LS-DYNA demonstrate the soundness of the BEM-based approach and the wide applicability of the method.
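The full FMM is beyond a short example, but its central idea, replacing distant pairwise interactions with a single aggregate per cluster, can be sketched with a toy one-dimensional Barnes-Hut-style treecode. This is an illustration of hierarchical acceleration only, not the PARTS library or a complete multipole expansion.

    # Toy 1D Barnes-Hut-style treecode illustrating the core idea behind
    # FMM acceleration: nearby interactions are computed directly, while
    # a distant cluster is approximated by its aggregate (monopole)
    # charge. Illustration only; not the PARTS library or a full FMM.
    import numpy as np

    class Node:
        def __init__(self, xs, qs):
            self.center = xs.mean()
            self.radius = (xs.max() - xs.min()) / 2 + 1e-12
            self.charge = qs.sum()          # monopole (aggregate) term
            self.xs, self.qs = xs, qs
            self.children = []
            if len(xs) > 8:                 # split until leaves are small
                left = xs <= self.center
                self.children = [Node(xs[left], qs[left]),
                                 Node(xs[~left], qs[~left])]

    def potential(node, x, theta=0.5):
        """Potential at x from all charges in node (kernel 1/|x - y|)."""
        d = abs(x - node.center)
        if d > 0 and node.radius / d < theta:
            # Far away: whole cluster replaced by one aggregate charge.
            return node.charge / d
        if node.children:
            return sum(potential(c, x, theta) for c in node.children)
        # Near field at a leaf: direct summation, skipping the source.
        return sum(q / abs(x - y)
                   for y, q in zip(node.xs, node.qs) if y != x)

    rng = np.random.default_rng(2)
    xs = rng.uniform(0, 1, 2000)
    qs = rng.uniform(-1, 1, 2000)
    tree = Node(xs, qs)

    # Compare the treecode against the O(N^2) direct sum at one point.
    x0 = xs[0]
    approx = potential(tree, x0)
    direct = sum(q / abs(x0 - y) for y, q in zip(xs, qs) if y != x0)
    print(approx, direct)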
High Performance Computing, Fast Multipole Method, Electromagnetism, Acoustics