During my five years at Modelon I have been involved in many kinds of modeling projects using Modelica and FMI: development of specific components and systems, pure library development, system model development, and tool cross testing.

One common thread in all these projects is that errors and bugs have unintentionally been introduced in both models and tools by me and others. A key factor for success in all my projects has been a trusted regression testing framework that detects the errors introduced as quickly as possible before they can cause real trouble.

This article introduces the Model Testing Toolkit, previously known as the OPTIMICA Testing Toolkit.

Regression testing of software is recognized as an industry best practice for meeting high standards of quality and reliability. My experience, and that of my colleagues, shows that model development is no exception. No matter how the model development process looks, regression testing allows every modification to the code to be trusted, thus reducing risk and increasing development efficiency. In addition, when your Modelica model is required to run on multiple tools, cross testing becomes a necessity for your evolving code to keep working with all of them.

The form of the regression testing has varied greatly between projects, since both the testing criteria and the reason for testing differ from project to project. In one project you may test the robustness of a library, while in another you test whether a model still meets its requirements. I find it useful to talk about three distinct testing use cases:

  • Regression testing of a component library: When working with library development, it is important that component development only has the expected effects, and that new developments do not alter the desired component behavior. By maintaining a complete component test suite that is run either daily or on each commit, you can easily detect undesirable changes in behavior by comparing a selected set of signals in each test model against a ‘golden’ result or reference. An example can be seen in Figure 1, and a minimal sketch of such a comparison follows this list.

Figure 1: The simulation result (green curve) is compared to a reference trajectory (yellow curve), resulting in a verification failure because the result leaves the ‘acceptance tube’ (cyan and turquoise lines) after 0.530 seconds. This particular image is an output from csv-compare, an open-source trajectory comparison tool made available by the Modelica Association [2].

  • Regression testing of system and sub-system models: Regression testing of systems is similar to that of components in that you want to track changes in behavior. However, there are some key differences:

– Parameterization becomes a possible source for changes in behavior.

– The component modeler is not necessarily the system expert, meaning that the modeler and test engineer may need to be two different experts.

– The success criteria may differ: instead of comparing against a reference, the test may check requirements directly, for example verifying that a set of variables stays within required bounds.

  • Regression cross testing: Testing how different components in a workflow work together becomes more complex as the number of components grows. Your model may depend on your own libraries as well as commercial libraries, and different tools may be used in different parts of the workflow. Testing how all libraries and tools perform together requires a test to be run whenever an individual component changes, so that an error can be tied to its actual cause, e.g. a new version of a tool or commercial library, or a commit to your own library or models.
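To make the two kinds of success criteria above concrete, here is a minimal, tool-agnostic sketch in Python (only NumPy is used; the function names and tolerances are illustrative assumptions, not part of any Modelon product). The first check compares a simulated signal against a reference trajectory using an acceptance tube, as in Figure 1; the second is a requirement-style bound check.

```python
import numpy as np

def within_tube(time, result, ref_time, ref_values, abs_tol=1e-3, rel_tol=0.02):
    """Compare a simulated signal against a reference trajectory.

    The reference is interpolated onto the result's time grid and an
    'acceptance tube' of half-width abs_tol + rel_tol * |reference| is
    built around it. Returns (passed, time_of_first_violation).
    """
    ref = np.interp(time, ref_time, ref_values)
    tube = abs_tol + rel_tol * np.abs(ref)
    violations = np.abs(result - ref) > tube
    if violations.any():
        return False, time[np.argmax(violations)]   # first sample outside the tube
    return True, None

def within_bounds(result, lower, upper):
    """Requirement-style check: every sample must stay inside [lower, upper]."""
    return bool(np.all((result >= lower) & (result <= upper)))
```

In practice a dedicated tool such as csv-compare [2] would perform the tube comparison; the sketch only illustrates the principle.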

To meet these use cases, I believe there are a set of requirements that a testing framework for model development must fulfill:

  1. Interoperability: As mentioned in a previous blog post, models or FMUs are often used to ‘connect’ different tools. This requires the FMUs to simulate on multiple tools, once again stressing that cross testing is key.
  2. Flexibility: The three use cases each pose different requirements on the testing framework, where the compiler, the simulation environment, and the libraries may all change. For a framework to truly manage all use cases, it must therefore be flexible in how a test is compiled, simulated, and verified. Changing the tools and methods for compilation and simulation needs to be handled seamlessly, as does changing the method or metric used for verification and reporting (a hypothetical specification sketch follows this list).
  3. Automation: Testing should be an integrated and automated part of the development process. One solution is to run the tests automatically on a server, preferably on a continuous integration platform like Jenkins. To keep this process as lean as possible, the framework needs to be able to identify the subset of tests associated with a component, while running all tests if the tool version or some simulation or compilation option changes.
  4. Efficient test authoring: In addition to running tests automatically, it is also important to be able to run tests locally, for example before a commit. This leads to the requirement of a GUI where tests and test suites can easily be created, managed, run locally, and analyzed. An example of an interesting analysis is the test coverage of a given test suite. Furthermore, there must be a way for a system expert to create and edit the test specification without having to dig into the Modelica code. Together with the previous interoperability and flexibility requirements, this requirement implies that the GUI should be a separate application and not necessarily a Modelica editor.
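To illustrate the flexibility requirement, a test case can be described declaratively so that the compiler, the simulation environment, and the verification method can each be swapped without touching the model. The sketch below is a hypothetical format written in Python; it is not the actual specification format of the Model Testing Toolkit, and the model and signal names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A declarative test case: each step names a tool or method that can be swapped."""
    model: str                          # Modelica class to test
    compile_with: str = "Dymola"        # tool that exports the FMU
    simulate_with: str = "PyFMI"        # tool that runs the FMU
    verify: str = "reference"           # 'reference' (golden result) or 'requirements'
    signals: list = field(default_factory=list)   # signals to compare or bound-check
    stop_time: float = 1.0

# The same component tested across two simulation environments.
suite = [
    TestCase(model="MyLibrary.Examples.Pendulum",
             simulate_with="FMI Toolbox for MATLAB/Simulink",
             signals=["revolute.phi"]),
    TestCase(model="MyLibrary.Examples.Pendulum",
             simulate_with="PyFMI",
             signals=["revolute.phi"]),
]
```

Because each test names its tools explicitly, re-running the suite with a new compiler or simulator version amounts to changing one field rather than rewriting the tests.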
It is with these test use cases and requirements in mind that we at Modelon are developing a testing framework, the Model Testing Toolkit (previously called OPTIMICA Testing Toolkit). It allows for easy and efficient Modelica and FMI cross tool testing, where you can, for example, compile a model in Dymola and have it simulated in the FMI Toolbox for MATLAB/Simulink.
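As a heavily simplified illustration of what FMI-based cross-tool testing looks like in practice, the snippet below simulates an FMU, for example one exported from Dymola, using the open-source FMPy library as a stand-in simulation backend. FMPy is used here only because it is freely available; it is not part of the Model Testing Toolkit, and the file name and signal are hypothetical.

```python
from fmpy import simulate_fmu  # pip install fmpy

# Hypothetical FMU exported by the compiling tool (e.g. Dymola).
result = simulate_fmu("Pendulum.fmu", stop_time=2.0, output=["revolute.phi"])

# The result is a structured array; the same verification code sketched
# earlier (tube comparison or bound check) can be reused on these columns.
time = result["time"]
phi = result["revolute.phi"]
print(f"Simulated {len(time)} points, final angle {phi[-1]:.4f} rad")
```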

The Model Testing Toolkit also includes a GUI for efficiently authoring test suites and running them locally for result auditing; a screenshot can be seen in Figure 2.

If you already have test suites in place, conversion scripts from the most common Modelica test specifications, such as the experiment annotations, are also included. There are also utilities for integration with Jenkins to help automate the cross testing, all to ensure that your model portfolio maintains its integrity over time and integrates seamlessly with different Modelica platforms.
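On the automation side, a continuous integration job, on Jenkins for example, typically just calls a small script that decides which tests to run. The sketch below is a hypothetical wrapper written for this article, not the Jenkins integration shipped with the toolkit: it runs only the tests belonging to changed packages, but falls back to the full suite when the tool configuration changes (the file and package names are invented, and a git checkout is assumed).

```python
import subprocess

# Hypothetical test suite: (test name, top-level package the test belongs to).
SUITE = [
    ("Pendulum_reference", "MyLibrary"),
    ("Engine_requirements", "VehicleModels"),
]

def changed_files(base="origin/master"):
    """Files changed since the base revision (assumes the job runs in a git checkout)."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()

def select_tests(changes):
    """Run the full suite when the tool configuration changes;
    otherwise run only tests whose package contains a changed file."""
    if any(f.endswith("tool_config.json") for f in changes):
        return SUITE
    changed_pkgs = {f.split("/", 1)[0] for f in changes}
    return [t for t in SUITE if t[1] in changed_pkgs]

if __name__ == "__main__":
    selected = select_tests(changed_files())
    print(f"Running {len(selected)} of {len(SUITE)} tests")
```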

What are your experiences from model regression testing? Would a product like Model Testing Toolkit help you? Get in touch!

References:

  1. Tilly, A., Johnsson, V., Sten, J., Perlman, A., Åkesson, J., OPTIMICA Testing Toolkit: a Tool-Agnostic Testing Framework for Modelica Models, Modelica Conference 2015, Paris, 21-23 September 2015, pp.687-693.
  2. https://github.com/modelica-tools/csv-compare – accessed 11 January 2016.