Standardised Benchmarks for Increasingly Complex Computational Frameworks

26/10/2021 10:40 conference time (CEST, Berlin)

Room: G

N. Wilke (Pretoria University, ZAF)

Increases in the complexity of mathematical models and computing devices demand benchmarks that verify the accuracy and correctness of solutions. The quality of a simulation depends on a chain of events that includes i) the suitability of the mathematical model, ii) the quality of the realisation of the computational model, iii) the reliability of the computing device, and iv) the practitioner's application thereof. As the quality of a simulation depends on executing each aspect to perfection, the role of benchmarking in ensuring the integrity of these steps cannot be overstated. NAFEMS plays a significant role in standardising benchmarking, as exemplified by the recent publication of the Computational Fluid Dynamics Benchmarks - Volume 1 in 2017.
Unfortunately, benchmarks for computational granular dynamics have been limited to individual academic studies of particle systems with small numbers of particles, idealised particle shapes and idealised interactions. Recent increases in the complexity of particle systems, which now include various shape representations, path-dependent contact laws and coupled multi-physics interactions, demand standardised benchmarks that address these complexities.
Designing appropriate standardised benchmarks is challenging, and careful consideration is required to avoid misinterpretation and false confidence when they are passed. Ideally, each benchmark is designed to target specific aspects of a computational framework with purpose, meaning and clarity. Benchmarks need to complement each other so that, together, they explore targeted aspects of a computational framework from diverse angles. A benchmark set needs to clearly define the implications of passing it and explicitly state which parts of a computational framework are excluded or insufficiently assessed by the set. The latter would guide the development of complementary and diverse benchmark sets instead of duplicating the assessment goals of existing sets.
However, these benchmark ideals are easily stated but challenging to realise: Which results would serve as a comparison? Which quantities within these results are the most informative towards the benchmark's aims? How many quantities are sufficient for the comparison? These requirements are further complicated when energy and time efficiency are also assessed during benchmarking. The former, the energy required per solution, plays an increasingly important role as simulation codes become more accessible and utilised globally.
This paper explores potential approaches and considerations that may spur the development of standardised computational granular dynamics benchmarks that embrace the complexities that modern computing has normalised.

benchmarks, verification, validation, computational granular dynamics, discrete element method

Quantifying the Value of Modeling, Analysis and Simulation

26/10/2021 11:00 conference time (CEST, Berlin)

Room: G

G. Thomas (Open iT Norge AS, NOR)

Have you ever wondered how to view the usage of all of the different toolboxes from MathWorks in your organization? As technology becomes more complicated and vendors expand their offerings, there continues to be more overlap between capabilities of different vendors. MathWorks now has over 100 toolboxes, making it difficult to determine which ones are truly needed in an organization.
The following question is a natural consequence of three trends:
1) Companies rely more on software than ever before
2) Software surrounds us in our everyday life
3) The tools used by engineers and scientists are evolving faster than ever
Does a company need to invest in more expensive software tools to accelerate the pace of ideas becoming a reality? It is often difficult for organizations to quantify the value of such investments. Since MathWorks products depend on MATLAB to function properly, it is imperative to ensure there are enough MATLAB licenses for all users. Tracking MATLAB license usage from the log files can be tedious and error-prone, leading to incorrect assumptions.
In this talk, Gareth Thomas will show how a technical and scientific computing consulting company quantifies the value of using MathWorks tools in its business model and internal team training. He will share his experience in effectively tracking the usage of MATLAB and its different toolboxes, and in using that data to drive internal discussions of the value that software usage metering brings to the company.
He will share his insights on how to:
• Automatically charge based on the number of hours of MATLAB use
• Change the internal behavior and training of technical consultants
• Modify the business model based on customers' usage data
• Overcome common challenges
• Take the fundamental steps in optimizing licenses that can help drive internal discussions
• Demonstrate the value of software usage metering to an organization

engineering software management, MathWorks, toolboxes, MATLAB, software usage metering, license optimization, track software usage

Ensuring Quality and Accuracy: How Do We Test Algorithms and Solvers?

26/10/2021 11:20 conference time (CEST, Berlin)

Room: G

C. Hickey, R. Kannan (Arup, GBR)

In the interest of comparing and analysing Microsoft Windows-based eigensolvers for structural finite element problems in civil engineering, we construct a series of checks that can be used for sparse symmetric generalised eigenvalue problems. These are based on algorithmic error analysis and allow us to connect eigenvalue theory to a user-prescribed allowable error in eigenvalues and eigenvectors as a function of the unit roundoff in finite precision arithmetic. While the checks themselves are not new, i.e. they are based on existing literature, to our knowledge there exists no definitive list of comprehensive minimum criteria that a generalised eigenvalue solver must satisfy.
Our test framework consists of three categories of tests. The first tests for correctness as defined by a bounded forward error, obtained (where feasible) by comparison with a dense "direct" eigensolver. By direct we mean an eigensolver that computes all eigenpairs of a given matrix, usually by diagonalising the matrix pencil using a variant of the QR-algorithm family.
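As a minimal illustration of such a forward-error check (our own sketch, not the authors' C++ framework), the eigenvalues returned by a sparse projection-based solver can be compared against a dense direct solve; here SciPy's ARPACK wrapper `eigsh` is checked against LAPACK's `eigh` on a small 1-D Laplacian chosen purely for demonstration:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Small sparse symmetric test matrix: the 1-D Laplacian (illustrative only)
n = 50
K = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# Dense "direct" reference: all eigenvalues via a QR-family method
ref = eigh(K.toarray(), eigvals_only=True)

# Sparse projection-based solver: the five eigenvalues nearest zero,
# computed in shift-invert mode
approx = np.sort(eigsh(K, k=5, sigma=0, return_eigenvectors=False))

# Forward error of each sparse eigenvalue against the dense reference;
# the tolerance here is a placeholder, not a bound from the paper
fwd_err = np.abs(approx - ref[:5])
assert fwd_err.max() < 1e-8
```

In practice the reference would come from a trusted dense solver run on the same FEM matrices, and the allowable forward error would follow from the user-prescribed tolerance rather than a fixed constant.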
The second category tests numerical robustness. Here we use three checks, the first of which is based on the backward error of the computed eigenpair. We compute the backward error and compare it against the unit roundoff of the underlying floating point type. This allows us to ensure that our eigensolver is backward stable and bounds our backward error to a modest multiple of the roundoff error. The second check looks for eigenvector orthogonality, since the eigenvectors of a generalised eigenvalue problem are either mass- or stiffness-orthogonal. Projection-based eigenvalue solvers (such as those used for sparse eigenvalue problems) suffer from a loss of orthogonality of eigenvectors, and the resultant eigenvectors can deviate from being stiffness- or mass-orthogonal. This check allows us to measure the loss of said orthogonality as a function of machine precision. The final check is an eigenvalue counting method that verifies all eigenvalues within a given interval have been retrieved.
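The backward-error and mass-orthogonality checks can likewise be sketched in a few lines (a simplified illustration under our own assumptions, not the authors' code) for the generalised problem Kx = λMx with symmetric positive definite K and M:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8

# Random symmetric positive definite "stiffness" and "mass" matrices
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)

# Solve K x = lambda M x; eigh returns M-orthonormal eigenvectors
vals, vecs = eigh(K, M)

# Backward error of each eigenpair, which a backward-stable solver keeps
# to a modest multiple of unit roundoff (tolerances here are placeholders,
# not the bounds used in the paper)
u = np.finfo(float).eps
normK, normM = np.linalg.norm(K, 2), np.linalg.norm(M, 2)
for lam, x in zip(vals, vecs.T):
    residual = np.linalg.norm(K @ x - lam * M @ x)
    eta = residual / ((normK + abs(lam) * normM) * np.linalg.norm(x))
    assert eta < 1e-12

# Loss of mass orthogonality: X^T M X should be close to the identity
ortho_loss = np.linalg.norm(vecs.T @ M @ vecs - np.eye(n))
assert ortho_loss < 1e-10
```

For a sparse projection-based solver the same two quantities would be computed from its returned eigenpairs, and the measured orthogonality loss reported as a multiple of machine precision.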
The final category compares runtime performance to ensure there are no performance regressions for the end-user.
We use this test framework in combination with a suite of real-world problems to compare a variety of sparse eigensolvers that work on Windows, written in C++. Our suite consists of FEM problems with a wide spectrum of different properties that can be used with this testing framework or others. It is executed in an automated "continuous integration" pipeline every time there is a change to the source code of the software.
We argue that this is the minimal set of questions that users must ask of their eigensolvers, and in conclusion we present the resultant answers, representing the state of the art for sparse Windows eigensolvers in 2020. Our investigations show that very few solvers can be embedded into modelling programs off the shelf, and they also uncover a variety of issues in several well-known libraries.

numerical linear algebra, algorithmic error analysis, modal analysis, eigenvalues, solvers