C5
HPC 1

Mechanical FEA Simulations on a 2000 Core Cluster
26/10/2021 08:35 conference time (CEST, Berlin)
Room: C
H. Guettler (MicroConsult Engineering GmbH, DEU); J. Beisheim (Ansys Inc., USA)
The solder joints that connect an integrated circuit (IC) to the printed circuit board (PCB) are subject to failure due to mechanical stresses during thermal cycling, caused primarily by thermal mismatch. When we started doing this class of simulations at MicroConsult, a typical simulation would take weeks to complete. Over the years, with the help of software enhancements and new hardware capabilities, these runtimes could be reduced to a few hours. With all improvements combined, we achieved a speedup of more than 300 running exactly the same simulation models from just a decade ago.

We will report on results from real-world simulations that have been performed on MicroConsult's HPC cluster. This includes a comparison of hardware improvements for Intel Xeon CPUs, from the Broadwell generation of 2015 up to the Ice Lake generation of 2021, as well as the complementary improvements implemented in the Ansys Mechanical code. The key to achieving optimal performance is not simply to use more cores and hope for the best, but to set up a balanced system: modern CPUs with advanced instruction sets (AVX-512), enough RAM, a fast mass storage system and the best performing interconnect.

Beyond using optimal hardware, it is imperative that users understand the scaling limitations of their FEA software. For Ansys Mechanical, the largest accelerations are typically achieved on models consisting only of solids; adding contacts generally degrades performance and scaling. However, by utilizing features such as small sliding contact and contact splitting, which became available during the latest release cycles, those limitations can be largely overcome, especially at high core counts.

Our recommendation is to use HPC for your daily simulations: it is not only an enabler for solving large finite element models but will also dramatically improve productivity. Instead of waiting overnight to reach the critical point, HPC can get the same amount of work done over an extended coffee break.

Background: MicroConsult is a high performance computing partner to Ansys and works closely with the Ansys Mechanical solver team in Canonsburg.
High Performance Computing, Mechanical Simulation, Performance
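The abstract above argues that understanding the scaling limits of the FEA code matters as much as adding cores. The sketch below is a minimal Python illustration of that reasoning using Amdahl's law; the serial fractions for "solids only" versus "with contacts" are hypothetical placeholders chosen for illustration, not measured Ansys Mechanical figures.

```python
# Illustrative only: Amdahl's-law view of why adding cores alone saturates.
# The serial fractions are assumed values, not measured Ansys data.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores when `serial_fraction` of the runtime
    cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    scenarios = {
        "solids only (assumed 1% serial work)": 0.01,
        "with contacts (assumed 5% serial work)": 0.05,
    }
    for label, serial in scenarios.items():
        for cores in (16, 128, 512, 2000):
            print(f"{label}: {cores:>4} cores -> "
                  f"speedup ~{amdahl_speedup(serial, cores):5.1f}x")
```

With an assumed 5% serial share the speedup stalls below 20x no matter how many of the 2000 cores are used, which mirrors the abstract's point that features such as small sliding contact and contact splitting, by shrinking the poorly scaling portion, matter most at high core counts.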
New Performance Dimension for Extremely Large CFD Simulations: Reduce Computing Times and Increase Result Quality With the Right Workflow
26/10/2021 08:55 conference time (CEST, Berlin)
Room: C
C. Woll (GNS Systems GmbH, DEU)
CFD simulations are complex and therefore demanding in practice: engineers often aim for high resolution while reducing computing time. Integrating CFD into computer-aided product development in the cloud meets these requirements and allows handling extremely large simulations. The challenge, however, is to reliably map the analysis engineer's complete workflow in the cloud and to fully automate it.

The fully automated workflow for an optimal aerodynamics simulation designed by GNS Systems increases the quality of the simulation results through highly scalable computing capacity in the cloud. In addition, improved job management reduces the complexity of computationally intensive CFD simulations for analysis engineers. They benefit from a tool that has already proven itself in customer use and efficiently supports the coordinated processing of recurring process tasks, even in heterogeneous IT environments. The design of the fully automated cloud workflow, demonstrated on an aircraft model built from coloured plastic clamping bricks, also includes practical tips for integrating it into the often highly complex IT environments used for CFD simulations. Based on realistic results from various pre- and post-processing experiments on lift, drag and rotational speed, a continuous workflow for computationally intensive simulations was created. Simulating the complex model down to the smallest detail also enables detailed turbulence calculations, very small time steps and high spatial resolution.

Behind the automated workflow is a selected CFD toolset in the cloud, which is already being used successfully in practice in this way. For example, a tool for simplified job submission combines existing workflows with standardized containers, virtual machines, job scheduling and cluster management. The improved job management significantly reduces simulation job response times by determining cloud resource requirements from the number and size of jobs in the waiting queue; unnecessary nodes are shut down to optimize costs. In combination with other toolboxes, this maps the workflow of an optimal aerodynamics simulation in the cloud in just a few steps and increases productivity in simulation processes by minimizing the effort for job submission. Even for compute-intensive CFD simulations that use a very large number of cores, the automated workflow delivers reliable, error-free and high-quality results.
CFD, Simulation, OpenFOAM, Cloud, Computation times, Workflow, Scalability, Automation, Use Case, Container, CycleCloud, Azure, CAE Engineering, Extremely large Simulations, Automated Workflow, Simplify Job Submission
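The improved job management described above derives cloud resource requirements from the number and size of queued jobs and shuts down unneeded nodes. The following Python sketch shows that scaling rule in its simplest form; the node size, cluster limit, job names and function names are assumptions for illustration, not the actual GNS Systems tooling or the Azure CycleCloud API.

```python
# Hypothetical sketch of queue-driven autoscaling; all parameters are assumed.
from dataclasses import dataclass

CORES_PER_NODE = 96   # assumed cores per cloud node
MAX_NODES = 64        # assumed cluster size limit

@dataclass
class Job:
    name: str
    cores: int        # cores requested by the CFD job

def nodes_required(queue: list[Job]) -> int:
    """Derive the node count from the number and size of waiting jobs."""
    total_cores = sum(job.cores for job in queue)
    needed = -(-total_cores // CORES_PER_NODE)   # ceiling division
    return min(needed, MAX_NODES)

def scale_decision(queue: list[Job], running_nodes: int) -> int:
    """Positive: start that many nodes; negative: shut idle nodes down."""
    return nodes_required(queue) - running_nodes

if __name__ == "__main__":
    queue = [Job("wing_les", 1536), Job("fuselage_rans", 384)]
    print(scale_decision(queue, running_nodes=8))   # -> 12 more nodes needed
```

In a real deployment a decision like this would be handed to the cluster manager's autoscaler; the point here is only that the resource requirement is computed from the queue rather than provisioned statically.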
Airplane Simulations on Heterogeneous Pre-Exascale Architectures
26/10/2021 09:15 conference time (CEST, Berlin)
Room: C
R. Borrell, O. Lehmkuhl, D. Mira, G. Houzeaux (Barcelona Supercomputing Center, ESP); R. Taghavi (SIW, ESP); G. Oyarzun (Barcelona Supercomputing Center, CHL)
High-fidelity Computational Fluid Dynamics simulations are generally associated with large computing requirements, which become progressively more acute with each new generation of supercomputers. However, significant research efforts are required to unlock the computing power of leading-edge systems based on increasingly complex architectures. We can affirm with reasonable certainty that future Exascale systems will be heterogeneous, including accelerators such as GPUs. We can also expect higher variability in the performance of the various computing devices engaged in a simulation, due to the explosion of parallelism and to technical issues such as hardware-enforced mechanisms that preserve thermal design limits. In this context, dynamic load balancing (DLB) becomes a must for the parallel efficiency of any simulation code.

In the Center of Excellence for engineering EXCELLERAT, the CFD code Alya has been provisioned with a distributed-memory DLB mechanism, complementary to the node-level load-balancing strategy already in place. The kernel parts of the method are an efficient in-house SFC-based mesh partitioner and an online redistribution module that migrates the simulation between two different partitions. These are used to correct the partition according to runtime measurements. We have focused on maximizing the parallel performance of the mesh partitioning process to minimize the overhead of the load balancing: our software can partition a 250M-element mesh for an airplane simulation in 0.08 s using 128 nodes (6,144 CPU cores) of the MareNostrum supercomputer.

We then applied all this technology to perform simulations on the heterogeneous POWER9 cluster installed at the Barcelona Supercomputing Center, with an architecture very similar to that of the Summit supercomputer at Oak Ridge National Laboratory, ranked second on the TOP500 list. On the BSC POWER9 cluster, which has 4 NVIDIA P100 GPUs per node, we have assessed the performance of Alya using up to 40 nodes for simulations of airplane aerodynamics. We demonstrated a well-balanced co-execution using both the CPUs and GPUs simultaneously, which is 23% faster than using only the GPUs. In practice, this represents a performance boost equivalent to attaching an additional GPU per node.
aerodynamics, High Performance Computing, CFD, Exascale
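The DLB approach above hinges on an SFC-based mesh partitioner that re-cuts the element ordering according to runtime measurements. The sketch below shows the idea in Python with a Morton (Z-order) curve and per-element weights; it is a hypothetical stand-in for illustration only, not Alya's in-house partitioner.

```python
# Hypothetical sketch of SFC (Morton) partitioning with runtime-based weights.
# Not Alya's implementation; function names and sizes are assumptions.
import numpy as np

def morton_key(centroids: np.ndarray, bits: int = 10) -> np.ndarray:
    """Interleave the bits of quantised x, y, z coordinates (Z-order curve).
    Assumes coordinates are normalised to [0, 1]."""
    q = np.clip((centroids * (2**bits - 1)).astype(np.uint64), 0, 2**bits - 1)
    keys = np.zeros(len(centroids), dtype=np.uint64)
    for b in range(bits):
        for d in range(3):
            keys |= ((q[:, d] >> np.uint64(b)) & np.uint64(1)) << np.uint64(3 * b + d)
    return keys

def sfc_partition(centroids: np.ndarray, weights: np.ndarray, nparts: int) -> np.ndarray:
    """Order elements along the curve, then cut into chunks of equal weight."""
    order = np.argsort(morton_key(centroids))
    cumulative = np.cumsum(weights[order])
    targets = cumulative[-1] * np.arange(1, nparts + 1) / nparts
    ranks_sorted = np.searchsorted(targets, cumulative, side="left")
    ranks = np.empty(len(centroids), dtype=np.int32)
    ranks[order] = np.minimum(ranks_sorted, nparts - 1)
    return ranks

# Example: random element centroids in the unit cube, weights standing in for
# measured per-element cost, split across 8 ranks.
rng = np.random.default_rng(0)
xyz = rng.random((100_000, 3))
cost = rng.uniform(0.5, 1.5, size=100_000)
print(np.bincount(sfc_partition(xyz, cost, 8), weights=cost))
```

Because the curve preserves spatial locality, each weight-balanced chunk stays geometrically compact, and correcting the partition after new runtime measurements only requires re-cutting an already sorted ordering, which helps keep the partitioning overhead small (the abstract reports 0.08 s for a 250M-element mesh on 6,144 cores).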
Paving the Way for the Development to Exascale Multiphysics Simulations
26/10/2021 09:35 conference time (CEST, Berlin)
Room: C
A. Dessoky, R. Schneider (Höchstleistungsrechenzentrum Stuttgart (HLRS), DEU)
The next stage in high-performance computing is called exascale: the coming generation of computers will be able to perform 10¹⁸ floating-point operations per second. To make the simulation programmes that will run on these computers fit for such architectures, the EU is funding the Center of Excellence EXCELLERAT, coordinated by the High Performance Computing Centre Stuttgart (HLRS). In this project, six computational codes (Nek5000, Alya, AVBP, TPLS, FEniCS, Flucs) will be optimised and further developed to create new simulation tools for the engineering community at Exascale level. These programmes are particularly targeted at the aerospace, automotive, energy and manufacturing sectors, but the results of the project will also benefit other sectors during its lifetime. The developments will be implemented using concrete use cases from industry and will form the basis for the Centre of Excellence's offer to help developers of other simulation programmes make them fit for the Exascale level as well.

As engineering is one of the industrial sectors with the highest exascale potential, EXCELLERAT bundles the existing European expertise to establish a centre of excellence for highly scalable engineering applications. Through this, services ranging from networking, training and access to knowledge to technical services such as co-design, HPC, data management and scalability extension will be offered in the future.

The talk will briefly introduce the EXCELLERAT project and then address the challenges faced by both developers and users of exascale applications. Examples of use cases and their innovative solutions from the automotive, aerospace, energy and manufacturing sectors will be presented to illustrate future multiphysics simulations. Dialogue with experts and potential users will also be sought in order to assess their needs and to include the required competences in the further course of the project.
Multiphysics Simulations at exascale level