"Testing of models is essential to the successful development of Modelica libraries" is a hypothesis we live by at Modelon, and one we established in a previous blog post. This post aims to back up that hypothesis by reviewing one of the most demanding model testing use cases: our own.
"Testing is tedious but necessary" is a maxim I live by daily. Developers seldom enjoy working with tests, whether creating, maintaining, or running them. That is also true at Modelon, where library developers face the considerable challenge of keeping all 14 Modelica libraries state-of-the-art while making sure they work properly on all supported platforms, such as Dymola and OPTIMICA Compiler Toolkit-based technologies like ANSYS Simplorer.
We have identified two major challenges in solving this problem: creating and maintaining the tests themselves, and automating their execution across all supported platforms. We address both by leveraging the versatile, free continuous integration platform Jenkins (covered only briefly in this post) and our new Model Testing Toolkit, to be released on March 29th this year.
Creating a test is often a tedious task in itself, so we have implemented a shortcut in the Model Testing Toolkit: the test_converter script. This script creates tests automatically from Modelica code, for example generating a test for every model that carries an Experiment annotation.
With the generated tests as a starting skeleton, the developer can fill in the blanks and have a test suite up and running much faster than if the tests were to be created from scratch.
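To make the idea concrete, here is a minimal sketch of what a script like test_converter might do: scan Modelica source for models with an experiment annotation and emit a test stub for each. The function name, the stub format, and the `compare_variables` field are illustrative assumptions, not the toolkit's actual output.

```python
# Hypothetical sketch of annotation-driven test generation.
# The stub dictionary layout is invented for this example.
import re

# Match "model Name ... end Name;" blocks (simplified; ignores nesting).
MODEL_RE = re.compile(r"model\s+(\w+)(.*?)end\s+\1\s*;", re.DOTALL)


def generate_test_stubs(modelica_source, package="MyLibrary"):
    """Return one test-stub dict per model carrying an experiment annotation."""
    stubs = []
    for name, body in MODEL_RE.findall(modelica_source):
        match = re.search(r"experiment\s*\(([^)]*)\)", body)
        if match:
            stubs.append({
                "model": f"{package}.{name}",
                "experiment": match.group(1).strip(),
                "compare_variables": [],  # the developer fills in the blanks
            })
    return stubs
```

Models without an experiment annotation are simply skipped, so the generated suite only covers models the library author marked as simulatable.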
Tests are edited and run locally through the GUI included in the Model Testing Toolkit. The GUI parses the variable structure of the models so that variables can be extracted for regression runs, and it also allows modifiers to be set, as shown in Figure 1.
Jenkins is an open-source automation server that can be installed and run on any machine. At Modelon, a master server manages a set of slave machines that execute the actual tests. To facilitate this setup, we rely on a set of utilities in the Model Testing Toolkit.
These utilities, together with their detailed documentation, provide the automation we need at Modelon to work efficiently with library (and tool) development.
The Model Testing Toolkit can output both HTML reports for humans to read (a small example is shown in Figure 2) and a JUnit XML report for Jenkins to parse in order to determine the test success rate. The HTML report also links to outputs and logs from the respective tools to help diagnose a test failure.
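As an illustration of the reporting side, the following sketch writes a minimal JUnit-style XML report of the kind a CI server such as Jenkins can parse. The element names follow the common JUnit XML convention; the function name and the test data are made up for the example and do not reflect the toolkit's actual API.

```python
# Minimal JUnit-style XML report generation (illustrative only).
import xml.etree.ElementTree as ET


def junit_report(suite_name, results):
    """results: list of (test_name, error_message_or_None) pairs."""
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # A <failure> child marks the test case as failed in Jenkins.
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")
```

Jenkins reads the `tests` and `failures` counts from the `<testsuite>` element to compute the pass rate shown on the build page.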
As a Modelica developer you can envision similar usages of this Model Testing Toolkit for your applications, such as:
To make sure that a model is distributable in your organization, the compiled FMU needs to be tested on all required platforms. Creating and running such cross-platform tests for FMI-supporting tools becomes very convenient.
The scripted framework of the Model Testing Toolkit allows for easy connections to your in-house FMI tools using Python.
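A cross-platform FMU smoke test of the kind described above could be sketched in plain Python as follows. The platform names and the `simulate` callable are illustrative stand-ins for whatever tool-specific simulation API is connected; none of this reflects the toolkit's actual interface.

```python
# Hypothetical cross-platform FMU smoke-test harness.
# 'simulate' is a stand-in for a real, tool-specific FMI simulation call.

PLATFORMS = ["win64", "linux64"]  # example platform identifiers


def check_fmu_on_platform(fmu_path, platform, simulate):
    """Run a short simulation of the FMU and report pass/fail for one platform."""
    try:
        simulate(fmu_path, platform)
        return (platform, "pass")
    except Exception as exc:
        return (platform, f"fail: {exc}")


def cross_platform_report(fmu_path, simulate):
    """Collect one result per required platform into a report dict."""
    return dict(check_fmu_on_platform(fmu_path, p, simulate)
                for p in PLATFORMS)
```

Because the simulation backend is passed in as a callable, the same harness can drive different in-house or commercial FMI tools without changing the test logic.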
Let's briefly summarize how the Model Testing Toolkit enables robust testing of Modelica-based enterprise platforms: automated test creation from Modelica code, a GUI for editing and running tests locally, utilities for Jenkins-based automation across platforms, and both HTML and JUnit XML reporting.
Since the Model Testing Toolkit has been invaluable to us in developing Modelica models, we will soon offer it as a commercial product for all Modelica developers and users. The release is scheduled for March 29th, 2017.
The Model Testing Toolkit can save you a lot of time; we know it did for us. Get in touch and we can start finding out how!
Johan Ylikiiskilä is Product Owner of the Model Testing Toolkit. He is a modeling and simulation engineer, as well as a numerical analyst, with focus on the interaction between models and numerical integration algorithms. Johan holds an MSc in Engineering Physics from Lund University.