
Test FMI export from OMxxx in the CI #11414

Open
casella opened this issue Oct 20, 2023 · 28 comments

@casella
Contributor

casella commented Oct 20, 2023

We should continuously test that the code export APIs of OMPython, OMJulia, and OMMatlab don't break. It is probably enough to do this on a selected set of models.

See also #11411.

@AnHeuermann
Member

OMJulia.jl itself is tested on Windows and Ubuntu on each commit, see e.g. https://github.com/OpenModelica/OMJulia.jl/actions/runs/6419309300.
We can add a test to export FMUs from OMJulia.jl (and some others as well) and run them with each available omc version at some time interval (each week/month/...).

@AnHeuermann
Member

The CI can probably be copy-pasted from OMJulia.jl to OMPython.

@casella
Contributor Author

casella commented Oct 24, 2023

@arun3688 could you help with that?

@arun3688
Contributor

@casella Sure, I can do that.

@arun3688
Contributor

@casella I am in the process of adding support to test the FMI export. Can you give some examples of which models we should test the FMI export on, probably from the MSL?

@casella
Contributor Author

casella commented Nov 2, 2023

I would suggest the following set, to begin with:

  • Modelica.Blocks.Examples.Filter
  • Modelica.Blocks.Examples.RealNetwork1
  • Modelica.Electrical.Analog.Examples.CauerLowPassAnalog
  • Modelica.Electrical.Digital.Examples.FlipFlop
  • Modelica.Mechanics.Rotational.Examples.FirstGrounded
  • Modelica.Mechanics.Rotational.Examples.CoupledClutches
  • Modelica.Mechanics.MultiBody.Examples.Elementary.DoublePendulum
  • Modelica.Mechanics.MultiBody.Examples.Elementary.FreeBody
  • Modelica.Fluid.Examples.PumpingSystem
  • Modelica.Fluid.Examples.TraceSubstances.RoomCO2WithControls
  • Modelica.Clocked.Examples.SimpleControlledDrive.ClockedWithDiscreteTextbookController
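For reference, an export check over this set could be sketched as below. This is a hypothetical sketch: the `MODELS` list and the `build_fmu_expression` helper are illustrative names, and actually running the commented OMPython calls requires OMPython and an omc installation.

```python
# Hypothetical sketch of an FMI-export check over the proposed MSL models.
# Only the expression-building helper runs standalone; the commented
# OMPython calls additionally require OMPython and an omc installation.

MODELS = [
    "Modelica.Blocks.Examples.Filter",
    "Modelica.Blocks.Examples.RealNetwork1",
    "Modelica.Electrical.Analog.Examples.CauerLowPassAnalog",
    "Modelica.Electrical.Digital.Examples.FlipFlop",
    "Modelica.Mechanics.Rotational.Examples.FirstGrounded",
    "Modelica.Mechanics.Rotational.Examples.CoupledClutches",
    "Modelica.Mechanics.MultiBody.Examples.Elementary.DoublePendulum",
    "Modelica.Mechanics.MultiBody.Examples.Elementary.FreeBody",
    "Modelica.Fluid.Examples.PumpingSystem",
    "Modelica.Fluid.Examples.TraceSubstances.RoomCO2WithControls",
    "Modelica.Clocked.Examples.SimpleControlledDrive.ClockedWithDiscreteTextbookController",
]

def build_fmu_expression(model: str) -> str:
    """Return the omc scripting call that exports `model` as an FMU."""
    return f"buildModelFMU({model})"

# Usage (requires OMPython + omc):
# from OMPython import OMCSessionZMQ
# omc = OMCSessionZMQ()
# omc.sendExpression("loadModel(Modelica)")
# for model in MODELS:
#     fmu_path = omc.sendExpression(build_fmu_expression(model))
#     assert fmu_path.endswith(".fmu"), f"FMU export failed for {model}"
```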

@arun3688
Contributor

arun3688 commented Nov 6, 2023

@casella I have now fixed the fmi_export test on the test cases provided. I have created a GitHub Actions workflow which automatically runs the test once a week, on Fridays at 12:00 am, and generates a test report. Of course we can change the schedule to once a month if needed. An example test report is available here:
https://github.com/OpenModelica/OMPython/actions/runs/6772519939

To see the regression tests, go to https://github.com/OpenModelica/OMPython/actions; you will see FMITest on the left side under Workflows. Click FMITest to see the scheduled runs.

@AnHeuermann
Member

@arun3688 Can you add some badges to the readme?

[FMITest badge]

@casella
Contributor Author

casella commented Nov 6, 2023

LGTM. Since it only takes 3 minutes to run, I wonder if we should just run it every day. @AnHeuermann what do you think?

@AnHeuermann
Member

We can run it on every commit and, in addition, weekly/daily.

@casella
Contributor Author

casella commented Nov 7, 2023

The only thing I am missing: it looks to me like this is running on a separate CI system (GitHub's) rather than ours. Was that done on purpose? If we run it on each commit, could it block the commit if it fails?

@arun3688
Contributor

arun3688 commented Nov 8, 2023

@casella The FMI regression test is kept as a separate action because there are two other FMI export tests run on each model, and those models belong to the MSL: https://github.com/OpenModelica/OMPython/blob/master/tests/test_FMIExport.py. So I hope we can keep the FMI regression test separate. I have now added support to test the FMI regressions on both Linux and Windows; see the sample report here: https://github.com/OpenModelica/OMPython/actions/runs/6797486440. On Windows it took more time to set up omc compared to Linux, but the overall test time is the same. The only thing we need to discuss is whether to run it once a day, once a week, or something like that; we could also test the latest nightly builds for regressions. I also noticed that there is pricing for GitHub Actions (https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions); on the free tier we can run actions for 2,000 minutes a month.

@bilderbuchi

I noticed that there is pricing for GitHub Actions (https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions); on the free tier we can run actions for 2,000 minutes a month

I'm pretty sure spending limits only apply to private repos, not public ones. From the first paragraph on that page:

GitHub Actions usage is free for standard GitHub-hosted runners in public repositories, and for self-hosted runners. For private repositories, each GitHub account receives a certain amount of free minutes and storage for use with GitHub-hosted runners, depending on the account's plan.

@arun3688
Contributor

arun3688 commented Nov 8, 2023

@bilderbuchi Great, and thanks for the clarification!

@casella
Contributor Author

casella commented Nov 8, 2023

@arun3688 if this is run on the GitHub infrastructure, I would run it on every commit, if possible.

@AnHeuermann
Member

The only thing I am missing: it looks to me like this is running on a separate CI system (GitHub's) rather than ours. Was that done on purpose?

Yes, so we can use the setup-openmodelica action to install OpenModelica on the GitHub Windows and Linux runners. For Linux we can recreate similar behavior in Docker containers. For Windows we would need a separate machine or some virtualization to make sure that none of the software installed to build OpenModelica can be found (to simulate an end-user Windows machine).
Also, fewer servers to handle ourselves.

If we run it on each commit, could it block the commit if it fails?

If we want, we can declare all or some of the tests as requirements for merging. By default they are not required to pass.

@arun3688 if this is run on the GitHub infrastructure, I would run it on every commit, if possible.

The free infrastructure of GitHub is of course not very fast (500 MB RAM and, I think, 2 CPU cores). If we increase the number of tests, we will need to wait longer or switch to our own machines (GitHub self-hosted runners or our usual Jenkins approach).

@casella
Contributor Author

casella commented Nov 9, 2023

Yes, so we can use the setup-openmodelica action to install OpenModelica on the GitHub Windows and Linux runners. For Linux we can recreate similar behavior in Docker containers. For Windows we would need a separate machine or some virtualization to make sure that none of the software installed to build OpenModelica can be found (to simulate an end-user Windows machine). Also, fewer servers to handle ourselves.

Great!

If we want, we can declare all or some of the tests as requirements for merging. By default they are not required to pass.

I think they should. There is no reason why we should allow breaking FMI export on Windows.

@arun3688 if this is run on the GitHub infrastructure, I would run it on every commit, if possible.

The free infrastructure of GitHub is of course not very fast (500 MB RAM and, I think, 2 CPU cores). If we increase the number of tests, we will need to wait longer or switch to our own machines (GitHub self-hosted runners or our usual Jenkins approach).

We'll see; at some point we may also decide to pay for an upgraded service. At the moment, I understand it takes a few minutes and runs in parallel to the stuff that runs on our servers, so I'd make it a blocker on each commit.

@arun3688
Contributor

@casella Now the FMI_Regression tests run on every commit. I have also added a separate action which runs the FMI_Regression tests alone every day at 9 am, testing our stable and nightly build versions. A generated sample report: https://github.com/OpenModelica/OMPython/actions/runs/6824990589

@casella
Contributor Author

casella commented Nov 11, 2023

Excellent! I have a couple of questions:

  1. What happens if any of the tests is broken? Do we get some email notification?
  2. I looked at the execution times; they seem a bit long to me, even considering that the hardware provided by GitHub is limited in power. If you look at our CI report, the total build time for Modelica.Blocks.Examples.Filter is 2.67 seconds, and the total simulation time is 0.08 s. How come generating and running the FMU takes 10 times longer? Is it possible to have some breakdown of that time?

@arun3688
Contributor

  1. What happens if any of the tests is broken? Do we get some email notification?

I guess we can add such an email notification in case of failures; I have to look into the documentation.

2. I looked at the execution times; they seem a bit long to me, even considering that the hardware provided by GitHub is limited in power. If you look at our CI report, the total build time for Modelica.Blocks.Examples.Filter is 2.67 seconds, and the total simulation time is 0.08 s. How come generating and running the FMU takes 10 times longer? Is it possible to have some breakdown of that time?

The CI directly uses buildModelFMU(), but in OMPython we use the ModelicaSystem() API to first build the model, which takes some time, and then use mod.convertMo2FMU() to generate the FMU, so the total time = buildModel() + buildModelFMU(). If I directly use sendExpression() to build the FMU, I am sure the execution time will improve a lot. Maybe I should directly use sendExpression("buildModelFMU(Modelica.Blocks.Examples.Filter)") for the FMI regression tests.
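The two paths described here could be sketched like this (a hedged illustration: the ModelicaSystem arguments are schematic, and running either path for real requires OMPython and an omc installation):

```python
# Sketch contrasting the two OMPython export paths discussed above.
# The direct sendExpression() path avoids the extra buildModel() step
# that ModelicaSystem() performs, so total time is roughly that of
# buildModelFMU() alone.

def direct_export_commands(model: str) -> list:
    """omc scripting calls for the faster, direct FMU export path."""
    return ["loadModel(Modelica)", f"buildModelFMU({model})"]

# Slower two-step path (needs OMPython + omc; arguments are schematic):
# from OMPython import ModelicaSystem
# mod = ModelicaSystem(..., "Modelica.Blocks.Examples.Filter", ...)  # runs buildModel()
# mod.convertMo2FMU()                                                # then builds the FMU

# Faster direct path:
# from OMPython import OMCSessionZMQ
# omc = OMCSessionZMQ()
# for cmd in direct_export_commands("Modelica.Blocks.Examples.Filter"):
#     omc.sendExpression(cmd)
```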

@casella
Contributor Author

casella commented Nov 11, 2023

  1. What happens if any of the tests is broken? Do we get some email notification?

I guess we can add such an email notification in case of failures; I have to look into the documentation.

Please do; I need to receive such emails in order to catch regressions immediately.

  2. I looked at the execution times; they seem a bit long to me, even considering that the hardware provided by GitHub is limited in power. If you look at our CI report, the total build time for Modelica.Blocks.Examples.Filter is 2.67 seconds, and the total simulation time is 0.08 s. How come generating and running the FMU takes 10 times longer? Is it possible to have some breakdown of that time?

Maybe I should directly use sendExpression("buildModelFMU(Modelica.Blocks.Examples.Filter)") for the FMI regression tests

I would say so. At the moment it is not a big deal, but if we add more tests in the future this could make a difference.

Thanks!

@AnHeuermann
Member

OMJulia.jl now has regression tests as well:
https://github.com/OpenModelica/OMJulia.jl/actions/workflows/regressionTests.yml

Currently we test on all combinations of:

  • OS: Windows, Ubuntu
  • Julia: v1.8, v1.9
  • OpenModelica: nightly, stable, (v1.21)

We test simulation, FMU export, and FMU import; the results of FMUs and simulations must be equal.

The tests are a bit tricky: as long as at least one test is failing, the exit status is failure. And they do fail at the moment, for various reasons.

Also, the tests are very slow. That is because I'm starting a new Julia process for each and every test, so the test suite can survive segmentation faults.
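The process-per-test isolation described here (sketched in Python for illustration, though OMJulia.jl does this with Julia processes) boils down to running each test body in a fresh interpreter and inspecting the exit code, so a crash only fails that one test:

```python
import subprocess
import sys

def run_isolated(test_code: str, timeout: float = 60.0) -> bool:
    """Run `test_code` in a fresh interpreter process so a segmentation
    fault in one test cannot take down the whole suite.
    Returns True iff the process exits cleanly (exit code 0)."""
    proc = subprocess.run(
        [sys.executable, "-c", test_code],
        capture_output=True,
        timeout=timeout,
    )
    return proc.returncode == 0

# A passing test exits 0; a failing or crashing one does not:
# run_isolated("assert 1 + 1 == 2")        -> True
# run_isolated("import sys; sys.exit(3)")  -> False
```

The process startup cost per test is exactly what makes such a suite slow, which matches the trade-off described above.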

@AnHeuermann
Member

Oh, and @casella: there is no e-mail notification available at the moment. We could add it with e.g. https://github.com/marketplace/actions/send-email (provided we add some login for our e-mail server to the OpenModelica/OMJulia secrets).

But at the moment all tests are failing. If you want to get some spam sent from the usual mail address, maybe @sjoelund knows what mail user we could use.

@arun3688
Contributor

@AnHeuermann Right now, if the tests fail, the user who created the configuration file gets an email notification; I don't know how this setting is configured. I got an email notification yesterday (20-11-2023) about test failures related to the OpenModelica setup: https://github.com/arun3688/OMPython/actions/runs/6927918624. However, the issue got fixed automatically, and in today's (21-11-2023) run all the tests passed.

@casella
Contributor Author

casella commented Nov 22, 2023

Ideally, I would like to get e-mail notifications in case of regressions. Getting a daily dose of reports in the mailbox no matter what is not very useful. Is that possible?

@arun3688
Contributor

@casella It is possible, I guess: you can just dummy-edit the workflow file https://github.com/OpenModelica/OMPython/blob/master/.github/workflows/FMITest.yml once and push it. According to the documentation (https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/notifications-for-workflow-runs), the user who modifies the file will receive the notifications. Otherwise we have to use some third-party action, e.g. https://github.com/marketplace/actions/send-email

@AnHeuermann
Member

Ideally, I would like to get e-mail notifications in case of regressions. Getting a daily dose of reports in the mailbox no matter what is not very useful. Is that possible?

Not easily. We would need to set up some database and compare to the previous run. In that case we could also use the OpenModelicaLibraryTesting script, add cases for Julia and Python to it, and add it to our Jenkins jobs. But this sounds like much more work than it was to add the GitHub workflows.
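The store-and-compare idea could start as small as keeping the previous run's pass/fail map (e.g. as a JSON file) and diffing it against the current one. The helper below is a hypothetical sketch; the names and the notification hook are illustrative, not an existing API:

```python
# Minimal regression detection between two CI runs.
# Both maps go from test name -> bool (True = passed); the previous
# map could be persisted as JSON between workflow runs (assumption).

def find_regressions(previous: dict, current: dict) -> list:
    """Return tests that passed in the previous run but fail now."""
    return sorted(
        name for name, passed in current.items()
        if not passed and previous.get(name, False)
    )

# A notification would then only be sent when the list is non-empty:
# regressions = find_regressions(previous_results, current_results)
# if regressions:
#     send_failure_email(regressions)   # hypothetical notification hook
```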

@casella
Contributor Author

casella commented Nov 23, 2023

I guess we should do that at some point, but never mind for the moment. As long as there are not too many tests and they are all supposed to be successful, I can check every now and then. If past test results are kept, one can always look back when something goes wrong.

6 participants