Test FMI export from OMxxx in the CI #11414
OMJulia.jl itself is tested on Windows and Ubuntu on each commit, see e.g. https://github.com/OpenModelica/OMJulia.jl/actions/runs/6419309300.
The CI can be copy-pasted from OMJulia.jl to OMPython, I guess.
@arun3688 could you help with that?
@casella Sure, I can do that.
@casella I am in the process of adding support to test the FMI export. Can you give some examples of which models we should test the FMI export on, probably from
I would suggest the following set, to begin with:
@casella I have now fixed the fmi_export test on the test cases provided. I have created a GitHub Action which automatically runs the test once a week, on Friday at 12:00 am, and generates the test report. Of course we can change the scheduling to once a month if needed. An example test report is available here. To see the regression tests you can go to
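For reference, a weekly Friday-at-midnight trigger in a GitHub Actions workflow looks roughly like this (a sketch only; the actual workflow file may differ):

```yaml
# Sketch of the schedule trigger; cron times are in UTC
on:
  schedule:
    - cron: "0 0 * * 5"   # every Friday at 00:00
  workflow_dispatch:       # also allow manual runs from the Actions tab
```

Switching to a monthly schedule would only mean changing the cron expression (e.g. `0 0 1 * *` for the first day of each month).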
@arun3688 Can you add some badges to the readme?
LGTM. Since it only takes 3 minutes to run, I wonder if we should just run it every day. @AnHeuermann what do you think?
We can run it on every commit and, in addition, weekly/daily.
The only thing I am missing: it looks to me like this is running on a separate CI system (GitHub's) rather than ours. Was that done on purpose? If we run it on each commit, could it block the commit if it fails?
@casella the FMI regression test is kept as a separate action, because there are 2 other FMI export tests done on each model, and those models belong to the MSL
I'm pretty sure spending limits only apply to private repos, not public ones. From the first paragraph on that page:
@bilderbuchi Great, and thanks for the clarification!
@arun3688 if this is run on the GitHub infrastructure, I would run it on every commit, if possible. |
Yes, so we can use the action setup-openmodelica to install OpenModelica on the GitHub Windows and Linux runners. For Linux we can recreate similar behavior in Docker containers. For Windows we would need a separate machine or some virtualization to make sure that none of the software installed to build OpenModelica can be found (to simulate an end-user Windows machine).
If we want, we can declare all or some of the tests as requirements for merging. By default they are not required to pass.
The free infrastructure of GitHub is of course not very fast (500 MB RAM and, I think, 2 CPU cores). If we increase the number of tests, we will have to wait longer or switch to our own machines (GitHub self-hosted runners or our usual Jenkins approach).
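As a hedged sketch of what such a job could look like (the setup-openmodelica action reference, its `version` input, and the test command are assumptions and may differ from the current release):

```yaml
# Sketch only: action name/version and the `version` input are assumptions
jobs:
  fmi-export:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: OpenModelica/setup-openmodelica@v1   # assumed action reference
        with:
          version: stable
      # hypothetical test entry point; the real repo layout may differ
      - run: python -m pip install . && python -m pytest tests/
```

The matrix runs the same steps on both Windows and Linux hosted runners, which is what makes the GitHub infrastructure attractive for catching Windows-only FMI export breakage.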
Great!
I think they should. There is no reason why we should allow breaking FMI export on Windows.
We'll see; at some point we may also decide to pay for an upgraded service. At the moment I understand it takes a few minutes, and it runs in parallel to the stuff that runs on our servers, so I'd put it as a blocker on each commit.
@casella Now the FMI_Regression tests will be run on every commit, and I have also added a separate action which runs the FMI_Regression tests alone every day at
Excellent! I have a couple of questions:
I guess we can add such an email notification in case of failures; I have to look into some documentation.
The CI directly uses the
Please do, I need to receive such emails in order to catch regressions immediately.
I would say so. At the moment it is not a big deal, but if we add more tests in the future this could make a difference. Thanks!
OMJulia.jl now has regression tests as well. Currently we test all combinations of:
simulation, FMU export, and FMU import; results of FMUs and simulations must be equal. The tests are a bit tricky: as long as at least one test is failing, the exit status is failure. And they do fail at the moment, for various reasons:
Also, the tests are very slow. That is because I'm starting a new Julia process for each and every test, so the tests can survive segmentation faults.
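The same process-per-test isolation pattern can be sketched in Python with only the standard library (this is an illustration of the idea, not the actual OMJulia.jl test harness, which spawns Julia processes):

```python
import subprocess
import sys

def run_isolated(snippet: str, timeout: int = 300) -> bool:
    """Run one test snippet in a fresh interpreter process, so a
    segmentation fault only kills the child, not the whole suite."""
    proc = subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True,
        timeout=timeout,
    )
    # A crash (e.g. SIGSEGV) surfaces as a nonzero or negative
    # returncode in the parent instead of taking the runner down.
    return proc.returncode == 0

# A passing test:
ok = run_isolated("assert 1 + 1 == 2")
# A crashing child is reported as a plain failure here:
crashed = run_isolated("import ctypes; ctypes.string_at(0)")  # segfaults
```

The price of this robustness is exactly the slowness described above: every test pays full interpreter (or, for Julia, JIT warm-up) startup cost.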
Oh, and @casella: there is no e-mail notification available at the moment. We could add it with e.g. https://github.com/marketplace/actions/send-email (provided we add some login for our e-mail server to the OpenModelica/OMJulia secrets). But at the moment all tests are failing. If you want to get some spam sent by the usual mail address, maybe @sjoelund knows what mail user we could use.
@AnHeuermann Right now, if the tests are failing, the user who created the configuration file gets the email notification; I don't know how this setting is configured. I got the email notification yesterday.
Ideally, I would like to get e-mail notifications in case of regressions. Getting a daily dose of reports in the mailbox no matter what is not very useful. Is that possible? |
@casella It is possible, I guess: you can just make a dummy edit to the workflow file https://github.com/OpenModelica/OMPython/blob/master/.github/workflows/FMITest.yml once and push it; according to the documentation https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs, the user who modifies the file will receive the notification. Otherwise we have to use some third-party action, e.g. https://github.com/marketplace/actions/send-email
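A hedged sketch of what such a third-party notification step could look like, using one marketplace "send email" action (the action reference and its inputs are written from memory and may differ; all addresses and credentials are placeholders to be stored as repository secrets):

```yaml
# Sketch only: inputs may differ from the action's current README;
# SMTP credentials come from repository secrets, addresses are placeholders
- name: Notify on failure
  if: failure()                     # run only when a previous step failed
  uses: dawidd6/action-send-mail@v3
  with:
    server_address: ${{ secrets.MAIL_SERVER }}
    username: ${{ secrets.MAIL_USERNAME }}
    password: ${{ secrets.MAIL_PASSWORD }}
    subject: FMI regression tests failed
    to: maintainers@example.org     # placeholder address
    from: GitHub Actions CI
```

The `if: failure()` guard is what turns this from a daily report into a regression-only notification, which is the behavior requested above.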
Not easily. We would need to set up some database and compare to the previous run. In that case we could also use the OpenModelicaLibraryTesting script, add cases for Julia and Python to it, and add it to our Jenkins jobs. But this sounds like way more work than adding the GitHub workflows was.
I guess we should do that at some point, but never mind for the moment. As long as there are not too many tests and they are all supposed to be successful, I can check every now and then. If past test results are kept, one can always look back when something goes wrong. |
We should constantly test that the code export APIs of OMPython, OMJulia, and OMMatlab don't break. It is probably enough to do it on a selected set of models.
See also #11411.