Prepare and write CGNS boundary connectivity and ZoneBC in one base #395
base: develop
Conversation
Note: this discussion started in #393 but has now migrated here. @jedbrown (the @-mention is not autocompleting, so I am not sure how Jed was on the earlier CGNS ticket but not on this one) contributed this picture in CNDA Slack and in the prior CGNS issue #296, which has since been closed, with the following description (into which I will lace my questions as I try to collect the data to write into these nodes; I am looking for a bit more detail about what is needed in each node/leaf).
This example has 3 element groups in a single zone: PyramidElements, TetElements, and PrismElements. What are the FSFBrackets that appear right after PrismElements and before ZoneBC? Is this just a boundary element mesh set that got written before the ZoneBC, after which the rest followed? Was this by design, or a glitch in the organization?
Notes (see CPEX 0031): weak BCs certainly belong on faces. Are they suggesting that these should also be given as node lists?
Jed said:
Jed said (much more)
You'll see it if you click on the "folder" nodes. The element types come from this typedef enum (in cgnslib.h):
CGNS_ENUMV( ElementTypeNull ) =CG_Null,
CGNS_ENUMV( ElementTypeUserDefined ) =CG_UserDefined,
CGNS_ENUMV( NODE ) =2,
CGNS_ENUMV( BAR_2 ) =3,
CGNS_ENUMV( BAR_3 ) =4,
CGNS_ENUMV( TRI_3 ) =5,
CGNS_ENUMV( TRI_6 ) =6,
CGNS_ENUMV( QUAD_4 ) =7,
CGNS_ENUMV( QUAD_8 ) =8,
CGNS_ENUMV( QUAD_9 ) =9,
CGNS_ENUMV( TETRA_4 ) =10,
CGNS_ENUMV( TETRA_10 ) =11,
CGNS_ENUMV( PYRA_5 ) =12,
CGNS_ENUMV( PYRA_14 ) =13,
CGNS_ENUMV( PENTA_6 ) =14,
CGNS_ENUMV( PENTA_15 ) =15,
CGNS_ENUMV( PENTA_18 ) =16,
CGNS_ENUMV( HEXA_8 ) =17,
[...]
The ordering in
Mostly, though usually one writes the volume elements first so those start at
They're supposed to be present and will be written when using the API. Here's an example from the test suite (includes multi-zone unstructured)
This appears optional, and is not written in the unstructured test. I haven't seen an example in which it's used for something I can recognize as important.
We want that because it's the way BCs are identified. It is automatically written by the interface. If for some reason we need naming heuristics instead, we could handle that, but we may as well write compliant files.
There is
Thanks @jedbrown, this is much clearer now. Thanks also to @jrwrigh, who explained quite a bit of this to me when we met on Tuesday to discuss it. As you can see @cwsmith, I have made a PR out of my WIP branch. Clearly all the real development described in the PR description still needs to be done, but I hope you don't mind that we use the PR review utility as a communication platform to plan, accelerate, and document our progress. @cwsmith, I think I have copied everything from the prior ticket, so if you want to delete those comments to keep the PR short, sweet, and focused just on the subject (relaxing/removing the C++14 requirement in the CGNS development that then exposed SCOREC/core code that was not C++14 compliant), feel free. I think I have merged those changes into this branch, which does build a chef executable with CGNS enabled (which of course is necessary for this PR's development).
DEVELOPMENT ROADMAP
Note, I am not super clear on point 3. @jedbrown, how does CEED-PHASTA run continuation work? Are geometry and boundary condition files written at the end of a run that allow continuation of that run with the existing partition, or are we currently pulling the solution back to a serial CGNS file for visualization, and thus needing work (and a plan) to save more data (in parallel?) to be used as a checkpoint that avoids redoing the partition and boundary condition creation?
On the last point, it will be possible to read any CGNS checkpoint as mesh and initial condition. If the number of ranks matches, we can continue with the same partition; otherwise we'll repartition and continue.
@jedbrown, I ask this question because, historically, for PHASTA, we have always created volume elements for the boundary elements with the first. I guess this whole comment can be TL;DR'd into the following questions for @jedbrown:
Just to be possibly pedantically clear, does
After meeting with @cwsmith and @jrwrigh this morning, and then thinking about it a bit during my bike ride in to campus, it occurs to me that we do have another route we could consider. I had dismissed it earlier for a variety of reasons, but I want to air it out here for support/criticism. The idea is to let chef keep writing PHASTA POSIX files as it does now and convert them to CGNS in a separate post-chef step. The downside is that I am less than sure of what CGNS wants us to "write" into a file to express parallelism. This is also what is paralyzing my decision to update the CGNS writer that is already in SCOREC/core, since it is hard for me to design it without a vision of how it gets written in parallel. Maybe I need to say a little more about how the data is laid out to help @jedbrown clear that mental/writer's block for me. After partitioning the MDS database, there is a distinct set of elements on each part, and they retain classification against a real geometric model (if one exists) or a topological model if it does not. What @cwsmith pointed out is that it is not so simple to have O(10k) processes write their "chunk" of:
So, at long last, my question: given the above data layout on, say, 10-100k ranks, can the provided CGNS library file writers work with the data distributed among ranks like this (assuming we have filled the arrays we pass with the correct global numbers for that rank's data)? Parsing that question into each category:
Or I guess that is my real question: are those CGNS writers going to handle the parallel writing, leaving our job to be only getting the data from rank-local numbering to the linearly increasing global numbering?
Obviously a strong C coder could do the same within chef, but I am not sure we have one of those. Further, the serial version of this code is pretty simple and follows the examples given here.
I'm not sure if I'm reading your whole post correctly, but this is how we do it in PETSc. Each rank creates a connectivity array for the elements it owns, and the collective write is done in these two lines (the first is metadata, the second is array data). https://gitlab.com/petsc/petsc/-/blob/main/src/dm/impls/plex/cgns/plexcgns2.c?ref_type=heads#L791-792 The vertex numbers are with respect to the node numbering used for |
… to compute a flat connectivity array transposed to CGNS needs, and the same transpose plus reduction from volume connectivity to surface connectivity for boundary elements. Compiles but is not tested, as we still need to modify the actual writing function in this file to open and write a CGNS file. Further, nothing is done yet for parallel with regard to global numbering
@jedbrown, thanks for the links to how PETSc sets up these files and writes data. For now I have abandoned the clunky idea of making a post-chef converter from POSIX to CGNS and returned to this branch to attempt to get these files written directly from chef, so we (@jrwrigh and I) should be able to mimic the PETSc writing closely. In the commit directly above, I believe I have the functions added to get the data for region connectivity and surface connectivity in the form needed. Further, I think the coordinates are collected into an array that matches the expectations of CGNS (all x, then all y, then all z, though I am not sure why someone put a comment saying FORTRAN indexing. I suppose they mean a flat array that runs over points with the linear point index first and the dimension of the point as the second index, not the 1-based indexing that I usually think of as Fortran indexing. C-centric folks don't always appreciate the value of flat arrays over arrays of arrays.)
so it should be simple (serial for now) to throw three contiguous slices of this array at the CGNS writer. @jrwrigh, do you want to take a cut at the CGNS file write stuff (set up for parallel, since that is surely the way we will go) so that we can get a serial test of coordinates and volume connectivity, and then add the boundary element lists? That would free me to work on creating the map from on-rank numbering to global numbering so that we could follow up that test with a parallel test soon. I am available to help if that is needed.
…on-rank-node-number]
…or doing the owned-node data condensation (coordinates and, later, solution), and wrote a potential coordinate condensation (though it might be better to change it to what the PETSc CGNS writer does... looked at that too late to copy in the first pass)
I think I have all the data in flat 1-D arrays for coordinates, volume connectivity, and boundary element connectivity to be chucked into a CGNS file in a parallel way. I am pretty terrible at function interfaces, so I pretty much pushed everything that needs to be shared into the output data structure. Work left to do:
I will likely plug away at 2-4.
…ough it does not yet actually write a CGNS file in the function writeCGNS
As I start to add the calls to CGNS to write the CGNS file, I am noticing some places to easily get confused:
…eems to think cgsize_t is int, which will never fly for our meshes with global numbering, so we probably need to find a way to tell Spack or otherwise that I want long long int there... I am also uncertain whether the SCOREC convention of calling this gcorp_t is going to play nice with CGNS calling it cgsize_t, but pushing this up to get help
…tation, and future paths to improvement
13 tests added and they pass. A couple of old ones fail. I only built chef. Do these tests need more stuff built?
Note the new tests use |
Thank you for adding the tests.
We should use
I'm not sure about those. Would you please paste the log/output ( |
My bad. These are cases I have not built. The chef wildcard picked them up.
OK. No worries. Thanks for confirming. |
I built everything and tested. I know why the CGNS tests fail (the library was built for 32-bit ints and the tests exit when that is not true).
Fixed now
Here is what swapDoubles does
(base) kjansen@viz003: /projects/tools/SCOREC-core/buildCGNS_OneBase $ mpirun -np 1 test/swapDoubles
(base) kjansen@viz003: /projects/tools/SCOREC-core/buildCGNS_OneBase $ ctest -R swap
100% tests passed, 0 tests failed out of 1
Total Test time (real) = 0.31 sec
after adding an
I think everything (except the prior CGNS tests that require 32-bit ints) is now passing, so I don't know what that conflict on test/swapDoubles.cc is all about. I don't know how you want to resolve the prior CGNS tests. Personally, I think they should be removed until someone has the time to make them use
I built a 32-bit CGNS library to check to be sure that our code would pick up
As I am not all that familiar with cgnsdiff, I also applied it to two hex meshes made with the 64-bit version. In that case it caught that the array sizes were different and the data types were the same. This is not proof that the 32-bit output is correct, but it at least produced arrays of the same size and number. This plus ParaView is pretty convincing. Hmm, all the tests pass. I was expecting failures, but I guess we are not checking that its output is empty? Would it make sense to change the test to something like this, and then add some check that it actually produces an empty result? Googling
Yeah, it's probably only checking that the command exits with a non-zero exit code. So doing something like
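The "something like" pattern could look like this, with plain diff standing in for cgnsdiff/h5diff; it checks both the exit status and that the textual output is empty:

```shell
#!/bin/sh
# Stand-in demo: create two identical files and one that differs.
printf 'a\nb\n' > ref.txt
printf 'a\nb\n' > same.txt
printf 'a\nc\n' > diff.txt

# Pattern 1: rely on the exit status. h5diff returns nonzero on difference;
# if cgnsdiff always exits 0, a status-only test passes even when files differ.
if diff -q ref.txt same.txt >/dev/null; then echo "status: identical"; fi

# Pattern 2: additionally require the textual output to be empty.
out=$(diff ref.txt diff.txt || true)
[ -n "$out" ] && echo "output: files differ"
```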
Interesting. Perhaps |
vs. the same command on two files with the same data types but different actual data
To be clear, |
…, added a couple of boundary element diagnostic fields, and replaced cgnsdiff with h5diff, which returns 1 on difference, for testing.
…BoundaryCellRank to be a FaceCenter field. This branch also provides a hacky way to get around ParaView only being able to visualize the first nodal field in the CGNS file by circulating the file-node creation order.
…er functions to keep long functions under 105 lines.
Prepare and write CGNS boundary connectivity and ZoneBC in one base
This is a WIP as we hope to coordinate the code development through this PR. Currently SCOREC/core prepares a broad variety of meshes, boundary conditions, initial conditions, and partitions of the same for the PHASTA flow solver with the application chef. That data is currently written for each part of the partition in what is referred to as PHASTA POSIX file format.
CEED-PHASTA, being built on PETSc, already has CGNS writing and growing CGNS reading capability. This PR seeks to leverage meshes already in the MDS database (which are usually classified against a model that has initial condition and boundary condition attributes). The developments of this PR will further leverage the routines that extract the volume mesh (coordinates and connectivity), initial conditions on the volume mesh entities, and the boundary element mesh (one boundary element connectivity per surface). Once this data is extracted into data structures, the existing CGNS writing capability will be modified to write all of the above data in a single "base" so that it can be read by PETSc and applied in the CEED-PHASTA solver.
The first test will be a simple flow through a box but more extensive tests will be developed as more capabilities are added.
Here is the required CMake configuration. Our testing was done within Spack using gcc 11.2, CGNS (latest, 4.4.0), and OpenMPI 4.1.1 (plus whatever dependencies they required). We also used Spack versions of Zoltan and ParMETIS. Simmetrix is not strictly required, but 10 of the 13 tests used a Simmetrix mesh; the SimModSuite version was 18.0-230111.