Unit testing Modelica component library?
I like how OpenModelica's testing results look; see:
- https://test.openmodelica.org/libraries/MSL_3.2.1/BuildModelRecursive.html
- click on a red cell: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.html
- choose "javascript" for a failing signal: https://test.openmodelica.org/libraries/MSL_3.2.1/files/Modelica.Electrical.Analog.Examples.AD_DA_conversion.diff.resistor.v.html
No idea how they are doing it, though. Obviously some kind of regression testing is going on, with previous results stored for comparison, but I don't know whether that comes from a testing library or is home-grown.
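For what it's worth, OpenModelica's scripting API has the pieces to build something similar yourself. A minimal sketch, assuming `omc` runs the script and that a reference result file from an earlier run is stored under `reference/` (the model, tolerances, and file names below are just placeholders; check `compareSimulationResults` against your omc version):

```
loadModel(Modelica); getErrorString();
// Simulate the example; fileNamePrefix controls the result-file name
// (here "actual_res.mat").
simulate(Modelica.Electrical.Analog.Examples.AD_DA_conversion,
  fileNamePrefix="actual"); getErrorString();
// Compare the fresh trajectories against a stored reference file;
// signals that leave the tolerance band are listed in the return value.
compareSimulationResults("actual_res.mat",
  "reference/AD_DA_conversion.mat", "diff.log",
  relTol=0.01, vars={"resistor.v"});
getErrorString();
```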
In general, I find it kinda sad/suboptimal that there isn't "the one" testing solution everybody can/should use (cf. e.g. nose or pytest in the Python ecosystem). Instead, everybody seems to cook up their own solution (or tries to), and all you find are Modelica conference papers (often without a trace of an implementation) or unmaintained libraries of unknown status.
Off the top of my head, I found/know of the following (some already linked in other answers here):
- OM testing
- JModelica testing (seems to only test for compiler errors?)
- XogenyTest (some tests of the library itself fail for me; also, it does not seem to include a test runner)
- MoUnit (something by Fraunhofer, and not publicly available - maybe in OneWind/OneModelica?)
- UnitTesting (apparently some kind of predecessor of XogenyTest; also, no sources/implementation found)
- Optimica Testing Toolkit (apparently a commercial product by Modelon)
- SystemModeler VerificationTest
- buildingspy, a Python package for regression testing among other things, under the umbrella of the Berkeley Modelica Buildings Library (simulation with Dymola only)
- Modelica_Requirements, a library for defining requirements for simulations (claimed to be open source and implemented, but apparently not available anywhere)
- ... I'm sure there are more I have forgotten or am not aware of
This seems like a pathological instance of https://xkcd.com/927/. It's kinda impossible for a (non-dev) user to know which of these to choose, or which are actually good/usable/available/...
(Not real testing, but also relevant: parsing and semantic analysis using ANTLR: modelica.org/events/Conference2003/papers/h31_parser_Tiller.pdf)
Writing a .mos script would be one way, but there is also a small proof-of-concept library by Michael Tiller, XogenyTest, which you could use as a basis.
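To illustrate the assertion style that approach builds on: plain Modelica already lets you write self-checking models with the built-in assert() (this is not XogenyTest's actual API, and the model below is a made-up example with a known analytic solution):

```
model TestFirstOrder "Self-checking test: x should follow 1 - exp(-time)"
  Real x(start=0, fixed=true);
equation
  der(x) = 1 - x;
  // Compare against the analytic solution at every step; the simulation
  // aborts with an assertion error if the tolerance is violated.
  assert(abs(x - (1 - exp(-time))) < 1e-3,
    "x deviates from the analytic solution");
end TestFirstOrder;
```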
I prefer using the .mos script; it works pretty well, especially when you further integrate your test framework into a continuous integration tool. buildingspy is a good example of this: although it doesn't come wired into a CI tool, it's still a good tool.
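As a sketch of how the .mos route could look in a CI job (assumptions: omc is available in the job, TestFirstOrder is the assert-based model from above, stopTime is a placeholder, and a surrounding shell script scans the printed messages for failures):

```
loadFile("TestFirstOrder.mo"); getErrorString();
res := simulate(TestFirstOrder, stopTime=5.0);
// simulate() returns a record; the messages field carries assert and
// solver output, which the CI wrapper can scan for failures.
print(res.messages + "\n");
print(getErrorString());
```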
Here's a reference for a good framework design: UnitTesting: A Library for Modelica Unit Testing.