During my five years at Modelon I have been involved in many kinds of modeling projects using Modelica and FMI: developing specific components and systems, pure library development, system model development, and cross-tool testing.
One common thread in all these projects is that errors and bugs have unintentionally been introduced into both models and tools, by me and by others. A key factor for success in all my projects has been a trusted regression testing framework that detects these errors as quickly as possible, before they can cause real trouble.
Regression testing of software is recognized as an industry best practice for meeting high standards of quality and reliability. My experience, and that of my colleagues, shows that model development is no exception. No matter what the model development process looks like, regression testing allows every modification to the code to be trusted, thus reducing risk and increasing development efficiency. In addition, when your Modelica model is required to run on multiple tools, cross testing becomes a necessity for your evolving code to keep working with all of them.
The form of regression testing has varied greatly between projects, since the testing criteria as well as the reasons for testing differ from project to project. You may test the robustness of a library in one project while testing whether a model still meets its requirements in another. I find it useful to talk about three distinct testing use cases:
- Parameterization becomes a possible source of changes in behavior.
- The component modeler is not necessarily the system expert, meaning that the modeler and test engineer may need to be two different experts.
- The success criteria may differ: instead of comparing against a reference, you may test against requirements, for example verifying that a set of variables stays within some required bounds.
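To make the two success criteria concrete, here is a minimal sketch in plain NumPy, not the OPTIMICA Testing Toolkit API: one check compares a new simulation result against a stored reference within a tolerance, the other checks the requirement that a variable stays within bounds. The trajectory values are made up for illustration.

```python
import numpy as np

def matches_reference(result, reference, rtol=1e-3, atol=1e-6):
    """Regression criterion: the new trajectory agrees with the reference."""
    return bool(np.allclose(result, reference, rtol=rtol, atol=atol))

def within_bounds(result, lower, upper):
    """Requirement criterion: every sample stays inside the allowed band."""
    return bool(np.all((result >= lower) & (result <= upper)))

# Hypothetical temperature trajectory [K] from two runs of the same model.
reference = np.array([293.15, 295.0, 298.2, 301.7])
result = reference + 1e-4  # a tiny numerical difference between tool versions

assert matches_reference(result, reference)
assert within_bounds(result, lower=273.15, upper=373.15)
```

A real framework layers tolerances per variable and reports which signals diverged, but the pass/fail logic of both use cases boils down to checks like these.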
To meet these use cases, I believe there is a set of requirements that a testing framework for model development must fulfill.
It is with these test use cases and requirements in mind that we at Modelon are developing a testing framework, the OPTIMICA Testing Toolkit. It allows for easy and efficient Modelica and FMI cross-tool testing: you can, for example, compile a model in Dymola and have it simulate in the FMI Toolbox for MATLAB/Simulink.
The OPTIMICA Testing Toolkit also includes a GUI for efficiently authoring test suites and running them locally for result auditing; a screenshot can be seen in Figure 2.
If you already have test suites in place, conversion scripts from the most common Modelica test specifications, such as experiment annotations, are also included. There are also utilities for integration with Jenkins to help automate the cross testing. All of this ensures that your model portfolio maintains its integrity over time and integrates seamlessly with different Modelica platforms.
What are your experiences from model regression testing? Would a product like OPTIMICA Testing Toolkit help you? Get in touch!
-----------------------------------------
* I am assuming here that you are using a version control system like Git or Subversion.

Johan Ylikiiskilä is a modeling and simulation engineer and numerical analyst who has been working at Modelon AB since early 2011. He holds an MSc in Engineering Physics from Lund University. In the last few years he has focused on the interaction between models and numerical integration algorithms, which led to his recent appointment as product owner of the OPTIMICA Testing Toolkit.