Uncertainty Quantification


Machine Learning Approach to Model Correlation for Vehicle Model Mobility Characteristics
27/10/2021 15:30 conference time (CEST, Berlin)
Room: L
G. Jones (SmartUQ, USA); H. Kolera, V. Jeganathan, E. Pesheck (Hexagon Manufacturing Intelligence, USA)
The value of a simulation tool is tied to the level of engineering insight it provides. While the validity of the modeling approach, the model inputs, and the simulation processes are critical considerations, engineering insight fundamentally depends on the quality and volume of simulation data produced. These large-scale datasets are frequently used to inform the engineering process and to improve the simulation model's predictive capability by improving its correlation with measured data. With the advent of high-performance computing, engineering simulations can often be deployed at scale; however, particularly for large problems, for example those with many inputs, a direct simulation approach can be inefficient for many data analytics tasks such as sensitivity analysis, optimization under uncertainty, and intelligent improvement of simulation model accuracy. In such cases, there is often benefit in employing a Machine Learning (ML) based Uncertainty Quantification (UQ) approach, training emulators (also known as predictive models) on high-fidelity simulation data to do the heavy lifting of the analytics tasks.

Many factors determine simulation model accuracy, and they can be condensed into noise, bias, parameter uncertainty, and model form uncertainty. To counter these effects and ensure that models faithfully match reality to the extent required, simulation models must be adjusted (calibrated) to agree with physical data. Calibration is often performed manually, guided by the simulation analyst's experience and intuition. More advanced calibration processes take a data-matching approach that assumes uncertainty in the parameters to be calibrated is responsible for all disagreements between the model's outputs and the physical data. The method then adjusts the parameters until an error metric such as Root Mean Squared Error (RMSE) is minimized.
These methods only address parameter uncertainty and can succeed in calibrating simulation models that are already fairly accurate; however, they are not designed to handle models with large amounts of uncertainty from other sources, e.g., model form uncertainty. Statistical calibration provides an alternative that both finds the optimal parameter values (similar to the data-matching problem) and produces a separate predictive model to estimate any remaining discrepancy, for instance due to model form error in the simulation. This discrepancy model can be useful for simulation model diagnosis and validation. Further, combining the prediction from the properly calibrated simulation with the discrepancy model prediction (as a correction factor) can allow accurate prediction of the real-world case even for significantly inaccurate simulations. An application that lends itself well to this capability is the virtual exploration and calibration of vehicle dynamics models. These models are truly multi-dimensional, with hundreds of input parameters and responses, and the values of these parameters carry a degree of uncertainty. The typical approach is to create a nominal model and then tune/calibrate the uncertain parameters manually. This approach is expensive both in time and in the biases the analyst can introduce. In this paper, the alternative approach of statistical calibration is presented and implemented on a military vehicle Multi-Body Dynamics model and a results dataset associated with ongoing Next Generation NATO Reference Mobility Model (NG-NRMM) development activities.
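The two-stage statistical calibration described above can be illustrated on a toy problem. Everything below is a stand-in: a hypothetical one-parameter simulator, a synthetic "reality" with a nonlinear term the simulator cannot represent (model form error), and a polynomial discrepancy fit where a full treatment would typically use a Gaussian process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physical" system: the true response has a nonlinear term
# that the simulator below cannot represent (model form error).
def reality(x):
    return 2.0 * x + 0.5 * np.sin(x)

# Hypothetical low-fidelity simulator with one calibration parameter theta.
def simulator(x, theta):
    return theta * x

x_obs = np.linspace(0.0, 3.0, 25)
y_obs = reality(x_obs) + rng.normal(0.0, 0.02, x_obs.size)  # noisy measurements

# Stage 1: data-matching calibration -- pick theta minimizing RMSE.
thetas = np.linspace(1.0, 3.0, 401)
rmse = [np.sqrt(np.mean((simulator(x_obs, t) - y_obs) ** 2)) for t in thetas]
theta_hat = thetas[int(np.argmin(rmse))]

# Stage 2: fit a discrepancy model on the residuals (a cubic
# polynomial stands in for the GP discrepancy term here).
resid = y_obs - simulator(x_obs, theta_hat)
delta = np.polynomial.Polynomial.fit(x_obs, resid, deg=3)

# Corrected prediction = calibrated simulator + discrepancy correction.
def corrected(x):
    return simulator(x, theta_hat) + delta(x)

x_new = np.linspace(0.0, 3.0, 100)
err_plain = np.sqrt(np.mean((simulator(x_new, theta_hat) - reality(x_new)) ** 2))
err_corr = np.sqrt(np.mean((corrected(x_new) - reality(x_new)) ** 2))
assert err_corr < err_plain  # discrepancy model recovers the missing physics
```

The final comparison is the point of the sketch: the calibrated simulator alone still carries the model form error, while adding the discrepancy prediction as a correction factor substantially reduces the error against the real response.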
Vehicle Dynamics, Multibody Dynamics, Statistics, Model Calibration, Data Analytics, Machine Learning, Uncertainty Quantification
Response Surface Model Development for Uncertainty Quantification and Model Verification and Validation Frameworks
27/10/2021 15:50 conference time (CEST, Berlin)
Room: L
D. Riha, E. Decarlo, M. Kirby (Southwest Research Institute, USA)
Despite the tremendous advancements in computational resources and techniques over the last several decades, the computational cost of high-fidelity physics-based simulations remains prohibitive to broad-scale adoption of probabilistic design and analysis methods in industry applications. Response surface models (also known as surrogate models) make it feasible to perform decision-critical probabilistic analysis and sensitivity studies that would otherwise be too costly. Effective response surfaces must efficiently represent the relationships between the inputs and outputs of a computationally intensive simulation and often must be built from a very limited set of training points. Since a response surface model is trained on a limited number of data points, it is important to account for this inherent uncertainty in rigorous uncertainty quantification (UQ) analyses. Of the many response surface approaches that exist (e.g., linear and nonlinear regression, neural networks), Gaussian process (GP) regression has proved to be a particularly valuable methodology: GP models are nonparametric and provide the flexibility needed to model the wide range of complex nonlinear relationships present in engineering applications. In addition, once trained, GP models capture both the prediction mean and the prediction uncertainty. Beyond serving as surrogate models for noise-free physics-based simulations, GP regression models can also accommodate noisy training data from experiments and account for that noise in prediction. These characteristics make GP models ideal for integration into probabilistic, UQ, and verification and validation (V&V) frameworks. To facilitate the creation and verification of both parametric response surface models and GP regression models, the NESSUS® Response Surface Toolkit (RST) was developed.
NESSUS RST also has built-in capabilities to create a space-filling design of experiments using Latin-hypercube sampling, provides both quantitative and visual goodness-of-fit assessments, and offers variance-based sensitivity indices of the response surface to guide V&V and UQ activities. This paper will explore the creation, evaluation, and usage of response surface models in engineering examples.
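The two GP properties the abstract highlights (a prediction mean plus a prediction uncertainty, with noisy training data handled in both) can be shown in a minimal NumPy sketch. The RBF kernel, its hand-picked hyperparameters, and the toy sine response are all illustrative assumptions; a real toolkit such as NESSUS RST would estimate hyperparameters from the data:

```python
import numpy as np

# Squared-exponential (RBF) kernel with fixed, hand-picked hyperparameters.
def rbf(a, b, length=0.2, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(1)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, x_train.size)

noise = 0.05 ** 2                       # observation noise variance
K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

x_test = np.linspace(0.0, 1.0, 50)
Ks = rbf(x_test, x_train)
mean = Ks @ alpha                       # predictive mean
v = np.linalg.solve(L, Ks.T)
# Predictive standard deviation (latent variance + noise).
std = np.sqrt(np.diag(rbf(x_test, x_test)) - np.sum(v ** 2, axis=0) + noise)
```

Even with only eight noisy training points, the posterior mean tracks the underlying sine response, and `std` quantifies how much to trust each prediction, which is exactly what a rigorous UQ or V&V framework consumes downstream.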
response surface models, surrogate models, gaussian process models, uncertainty quantification, verification and validation, probabilistic analysis
Comparative Uncertainty Quantification of Simplified Structural Dynamic Models for Performance Quality Prediction
27/10/2021 16:10 conference time (CEST, Berlin)
Room: L
L. Harris (Aerospace Systems Design Laboratory, USA); A. Cox, D. Mavris (Georgia Institute of Technology, USA)
The development of a novel launch vehicle is a long and costly process. The complexity of incorporating flexible-body dynamics in early design leads to frequent use of a rigid-body assumption until design decisions fix sectional mass properties. This assumption also reduces the dimensionality and complexity of modeling when benchmarking excitation frequencies and deflections. However, rigid-body simplifications and mass property generalizations have significant and difficult-to-quantify implications. Inadequate understanding of dynamic variability has been traced to costly vehicle redesigns and operating environment changes, due in part to the coupling with guidance and navigation. One way to integrate flexible-body dynamics earlier is through reduced order modeling (ROM), representing the vehicle as a simpler system. However, without proper uncertainty characterization between standard rigid and flexible dynamic analysis techniques, ROM can exacerbate these uncertainties because of the sparsity of mass property data. Commonly, performance variability is assessed using a combination of low-fidelity modeling, regression analysis, and probabilistic theory. This research uses this approach to identify variability and quantify the uncertainty introduced by rigid-body and flexible-body simplifications. Such a method enables vehicle designers to better understand the modeling implications of varying the mass property distribution in flexible-body dynamics during early phases of design. To characterize the quality of performance outputs, the method compares the variation in dynamic performance predicted by rigid and reduced flexible-body representations against a baseline full-scale flexible-body analysis, across alternatives of lumped mass properties.
Using a launch vehicle design based on publicly available data as a baseline, this research applies regression and probability theory to predict and characterize the variability of dynamic models, informing a threshold up to which the mass distribution can be generalized before performance variation becomes unacceptable. Model quality is ensured through predictive error quantification as well as sensitivity to the mass distribution. This method provides a quantitative uncertainty basis for decision making when adequate mass property calibration data is not available to anchor dynamic models. Development of design margins can thus be supported for highly variable systems, with prediction confidence for prospective models.
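The probabilistic half of this workflow, propagating mass-property uncertainty into dynamic-performance variability, can be sketched with Monte Carlo sampling through a toy single-degree-of-freedom model. The stiffness, nominal lumped mass, and 5% mass scatter below are illustrative numbers, not values from the paper:

```python
import numpy as np

# Toy single-DOF dynamic model: natural frequency f = sqrt(k/m) / (2*pi).
rng = np.random.default_rng(2)
k = 4.0e6            # N/m, assumed stiffness (illustrative)
m_nom = 1.0e3        # kg, nominal lumped mass (illustrative)

# Sample the lumped mass with an assumed 5% coefficient of variation.
m = rng.normal(m_nom, 0.05 * m_nom, 10_000)

f = np.sqrt(k / m) / (2.0 * np.pi)            # natural frequency samples, Hz
f_nom = np.sqrt(k / m_nom) / (2.0 * np.pi)    # nominal prediction, Hz

# Output variability: first-order propagation predicts
# cov(f) ~ 0.5 * cov(m) because f scales as m**(-1/2).
cov = np.std(f) / np.mean(f)
```

Comparing `cov` against an acceptability limit is the kind of quantitative threshold check the abstract describes for deciding how far the mass distribution can be generalized.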
Uncertainty Quantification, Dynamic Modeling, Rigid Body Analysis, Flexible Body Analysis, Launch Vehicle Dynamic Modeling, Probabilistic Analysis, Regression Analysis
Northrop Grumman Mk44 Chain Gun Optimization Using Predictive Analytics and Multibody Dynamics
27/10/2021 16:30 conference time (CEST, Berlin)
Room: L
B. Thornton (MSC Software, USA); J. Behren (Northrop Grumman, USA); G. Jones (SmartUQ, USA)
The Mk44 chain gun utilizes a custom-shaped spring, known as the Rounds Positioner Spring (RPS), to quickly translate rounds from its dual feed paths onto the bolt face. This component of the feed system is subject to two primary modes of failure: feed jam and spring fatigue. Both failures are heavily influenced by the spring’s shape. Optimization of the spring geometry is challenging because the system response is highly nonlinear and sensitive to the numerous parameters needed to describe the irregular spring geometry. Northrop Grumman has historically engineered system improvements using a traditional simulation-based trial-and-error approach. In this approach, engineers combine their judgment and experience with simulation results to iterate on potential design improvements. Despite this manual iteration approach’s tangible benefits, it is unlikely to achieve a true global optimization when applied to a system with multiple design parameters, competing constraints, and objectives. It is simply too complex for engineers to efficiently assimilate the nuanced relationships between the numerous variables for such systems. This presentation will discuss Northrop Grumman’s shift toward a more systematic approach to optimizing the Mk44’s RPS. In this approach, the engineering team fully automated the process of building models of the Mk44 feeder assembly in MSC Adams. They then used SmartUQ’s design of experiments (DOE) tools to prescribe the simulation runs needed for training an emulator (aka predictive model) of the physics-based Adams simulation. The emulator is shown to effectively predict system behavior for eight input variables and two critical analysis scenarios. Finally, the engineering team used the emulator in a nonlinear optimization algorithm to determine a spring shape optimized to reduce spring stress and propensity for feeder jams for multiple boundary conditions. The optimal design was then modeled in MSC Adams for validation and additional analysis. 
The emulator was also used in SmartUQ for further Uncertainty Quantification (UQ) analysis including sensitivity analysis and propagation of input uncertainties.
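The DOE-train-optimize loop described above can be sketched on a stand-in objective. Everything here is a toy replacement: a cheap quadratic "expensive simulation" in place of the Adams model, a random space-filling design in place of SmartUQ's DOE tools, and a dense grid search in place of the nonlinear optimizer:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder for a physics simulation over two normalized design inputs
# (the real problem had eight inputs and two analysis scenarios).
def expensive_sim(x):
    return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] - 0.7) ** 2

# 1. Space-filling-ish DOE over the 2-D design space.
X = rng.uniform(0.0, 1.0, size=(40, 2))
y = expensive_sim(X)

# 2. Train a cheap quadratic emulator by least squares.
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Optimize the emulator instead of the expensive simulation
#    (grid search stands in for a real nonlinear optimizer).
g = np.linspace(0.0, 1.0, 101)
grid = np.array(np.meshgrid(g, g)).reshape(2, -1).T
pred = features(grid) @ coef
x_opt = grid[np.argmin(pred)]

# 4. Validate the candidate design with one more "simulation" run,
#    mirroring the final validation of the optimized spring in Adams.
y_opt = expensive_sim(x_opt)
```

The payoff of the pattern is that step 3 evaluates the emulator thousands of times at negligible cost, while the expensive simulator is only run for the DOE and the final validation.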
Multibody Dynamics, Shape & Size Optimization, Automation, Results Visualization, Predictive Analytics, Uncertainty Quantification