Feature request: global t8_finalize for freeing all allocated t8code objects in one go #1295
Comments
Making use of smart pointers in the future would very probably solve your problem. But it would require using smart pointers for every resource that t8code uses, which might not always be the best choice.
In general, this is possible, and we have already thought of such a solution. However, such an approach would interfere with the lifetime of these objects, basically keeping them alive until the application shuts down. This is not desirable, since Julia's garbage collector should be able to destroy t8code mesh/forest objects at its own discretion.
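To illustrate the lifetime conflict (a made-up sketch; `t8_resource` and the registry below are not real t8code code): if the library itself held owning smart pointers to every resource, a global keep-alive list would pin each object until shutdown, taking the decision away from Julia's GC.

```cpp
#include <memory>
#include <vector>

// Stand-in for any t8code resource (cmesh, forest, shmem array, ...).
struct t8_resource { };

// If t8code itself held owning smart pointers to every resource, a
// global keep-alive list like this would pin each object until static
// destruction at program shutdown:
std::vector<std::shared_ptr<t8_resource>> g_keep_alive;

std::shared_ptr<t8_resource> make_resource() {
  auto r = std::make_shared<t8_resource>();
  g_keep_alive.push_back(r);  // the refcount can never drop to zero early,
  return r;                   // so Julia's GC cannot actually free anything
}
```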
Exactly these destroy functions are called in the finalizers on the Julia side. However, as already pointed out, when these finalizers run is up to the garbage collector.
We just had a lengthy discussion about this in the t8code developers' meeting. Globally tracking memory allocations throughout the whole t8code code base would be a laborious, error-prone task with potentially accompanying performance degradation. Fortunately, our suspicion is that just keeping track of allocated MPI shared memory already covers a lot of use cases. Providing a global clean-up routine for this is a feasible task.
> We just had a lengthy discussion about this in the t8code developers' meeting. Globally tracking memory allocations throughout the whole t8code code base would be a laborious, error-prone task with potentially accompanying performance degradation.

I agree, there are risks to the approach. And the user really needs to know when not to use this functionality (that is, most of the time).

> Fortunately, our suspicion is that just keeping track of allocated MPI shared memory covers a lot of use cases already. Providing a global clean-up routine for these is a feasible task.

This sounds good. In p4est, we may limit the functionality to a few top-level objects: connectivity, forest, ghost, mesh. It's still not trivial, but not unthinkable, to put it together. Please file an issue should this become pressing.
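A minimal sketch of what such a limited clean-up registry could look like, assuming entirely hypothetical helper names (`track`, `finalize_all`); only the few top-level allocation sites would register themselves:

```cpp
#include <functional>
#include <vector>

namespace t8_sketch {

// Registry of destructors for tracked top-level objects
// (e.g. connectivity, forest, ghost, mesh).
std::vector<std::function<void()>>& registry() {
  static std::vector<std::function<void()>> r;
  return r;
}

// Each tracked allocation site registers its own destroy call, e.g.
//   t8_sketch::track([f] { /* destroy forest f */ });
void track(std::function<void()> destroy) {
  registry().push_back(std::move(destroy));
}

// Destroy in reverse order of creation so that dependents (a forest)
// go away before what they depend on (its cmesh).
void finalize_all() {
  auto& r = registry();
  for (auto it = r.rbegin(); it != r.rend(); ++it) (*it)();
  r.clear();
}

}  // namespace t8_sketch
```

An object that is destroyed explicitly before shutdown would of course also have to deregister itself, otherwise `finalize_all` would free it a second time.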
Hi guys! This sounds like a tricky-to-get-right feature, @jmark, and I really don't know if it is possible to ship something that will make everyone happy...
I don't claim to know the right way to make this work, but here are some of the lessons I have learned (sometimes the hard way) when designing Python APIs for the C code I work with (which uses t8code and indirectly has to answer the same kind of questions):
The point I'm trying to articulate here is that this issue should be considered in the broader discussion of t8code's migration towards more modern C++, because whether the migration is meant to be a small surface refactoring or will include semantic revisions and redesigns will greatly influence how to proceed here. For example, if the C API is to be supported in the long run as a lower-level alternative to the main C++ API, this effectively constrains a lot of what can be done to address this issue _within_ t8code. I would expect that a pure C wrapper of t8code would be needed on the Trixi side to address the impedance mismatch in that case.
Thanks for the analysis (and trying not to derail from t8code towards p4est too much). Suppose we have the explicit malloc/free type of wrapped library, where some objects may depend on others and must be destructed in reverse order to clean up the dependency graph. Let us assume the dependencies are only known to the library and its developers, and they would be happy to add the necessary glue (exposed and/or internal). Would there be a minimal set of functions that the library should implement to support a GC wrapper (ideally compatible with various scripting frontends)?
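One guess at such a minimal surface, with entirely hypothetical names (none of these exist in t8code or p4est today): reference counting, plus just enough of the dependency graph exposed for the wrapper to keep parents alive, plus a last-resort global clean-up.

```cpp
// Entirely hypothetical declarations; nothing below exists in t8code
// or p4est today. Declarations only, so a wrapper could bind them.
extern "C" {

typedef struct t8_object t8_object_t;  // opaque handle for any object

// Reference counting: the wrapper refs an object when a script-side
// handle is created and unrefs it from the GC finalizer.
void t8_object_ref(t8_object_t* obj);
void t8_object_unref(t8_object_t* obj);

// Dependency glue: only the library knows the dependency graph, so it
// exposes just enough of it for the wrapper to keep parents alive.
int t8_object_num_parents(const t8_object_t* obj);
t8_object_t* t8_object_parent(const t8_object_t* obj, int i);

// Last-resort global clean-up, e.g. for MPI.add_finalize_hook!.
void t8_finalize(void);

}  // extern "C"
```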
@cburstedde I would say that yes, since what you suggest effectively boils down, if I understand your comment correctly, to supporting two disjoint APIs: the existing explicit malloc/free C API and a managed, GC-friendly wrapper API.
If you can guarantee that the two are never mixed together, which I believe is achievable with a good design, then I don't see a reason why this approach wouldn't work. This strategy can effectively be described as having the wrapper be part of the t8code repo. In C++, building such an abstraction layer on top of the existing t8code API is not very difficult, since (except for a few functions) it is quite straightforward to wrap the current API into a handful of RAII types. Indeed, since all the objects are already reference counted, enforcing correct deallocation order is really just a matter of making sure that each object holds a reference to its direct parent: elements keep their forest alive, forests keep their cmesh alive, and cmeshes keep the global module handle alive, basically. If you wish to keep a pure C codebase, however, I really don't know how feasible this is...
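For illustration, a sketch of the RAII layering described above; the `t8_*` handles and destroy functions are placeholders with stub bodies, not the real t8code signatures.

```cpp
#include <memory>
#include <utility>

// Opaque handles and destroy functions standing in for the real
// reference-counted t8code C API (placeholders, not real signatures).
struct t8_cmesh { };
struct t8_forest { };
void t8_cmesh_destroy(t8_cmesh* c) { delete c; }
void t8_forest_destroy(t8_forest* f) { delete f; }

class Cmesh {
 public:
  explicit Cmesh(t8_cmesh* raw) : handle_(raw, &t8_cmesh_destroy) {}
 private:
  std::shared_ptr<t8_cmesh> handle_;
};

class Forest {
 public:
  // The forest stores a copy of its cmesh wrapper, so the cmesh cannot
  // be destroyed while any forest built on it is still alive.
  Forest(t8_forest* raw, Cmesh cmesh)
      : parent_(std::move(cmesh)), handle_(raw, &t8_forest_destroy) {}
 private:
  Cmesh parent_;                       // declared first, destroyed last:
  std::shared_ptr<t8_forest> handle_;  // the forest handle goes away
                                       // before its cmesh reference does
};
```

Because the parent reference is an ordinary `shared_ptr` copy, reverse-order destruction needs no extra bookkeeping: releasing the last `Forest` automatically drops its `Cmesh` reference afterwards.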
Feature request
Is your feature request related to a problem? Please describe.
It would be helpful if t8code provided a global `t8_finalize` routine which frees all objects allocated by t8code in a proper and clean manner. Note that this is in contrast to `sc_finalize`, which merely checks whether all allocated objects have been freed and prints a warning and/or aborts the program.

What is the problem there?
A strong use case is interoperability with the programming language Julia, in particular Trixi.jl and especially MPI.jl.
Julia's garbage collector finalizes objects non-deterministically. That means that MPI usually gets finalized before t8code-related objects are finalized when Trixi.jl shuts down. This leads to nasty crashes/segfaults, since t8code allocates MPI-related objects, e.g. shared memory arrays.
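The failure mode, distilled into a few lines of plain MPI (this is deliberately erroneous code that typically segfaults; it stands in for what happens when Julia's GC runs a t8code finalizer after MPI.jl has already shut MPI down):

```cpp
#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  // Shared-memory arrays are ultimately backed by MPI windows:
  MPI_Comm shmcomm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &shmcomm);
  void* base = nullptr;
  MPI_Win win;
  MPI_Win_allocate_shared(1024, 1, MPI_INFO_NULL, shmcomm, &base, &win);

  MPI_Comm_free(&shmcomm);
  MPI_Finalize();      // MPI.jl's shutdown may reach this point first ...
  MPI_Win_free(&win);  // ... and the GC-triggered free then crashes:
                       // calling MPI after MPI_Finalize is erroneous.
  return 0;
}
```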
Describe the solution or feature you'd like
There is an `MPI.add_finalize_hook!()` for exactly such scenarios as described above. It would be very useful to have a `t8_finalize` routine which could be called by this hook.

In order to have such a feature, t8code needs proper allocation/deallocation tracking (like a managed memory pool). When designing the C++ code base, maybe the C++ runtime already provides such a feature.
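Sketched in C++, the requested routine would restore the safe shutdown ordering; `t8_finalize` here is the proposed function with a stub body, not something t8code provides today:

```cpp
#include <mpi.h>

// Proposed routine, not part of t8code today. A real implementation
// would drain an allocation registry like the one sketched earlier;
// the empty stub only makes this example self-contained.
void t8_finalize(void) { /* free all still-tracked t8code objects */ }

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  // ... t8code builds cmeshes, forests, MPI shared-memory arrays ...
  t8_finalize();   // release everything while MPI is still alive
  MPI_Finalize();  // now safe: no t8code-owned MPI resources remain
  return 0;
}
```

From Julia, registering such a routine via `MPI.add_finalize_hook!` would establish the same ordering without any explicit user calls.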
Describe alternatives you've considered
As of now, t8code-related objects in Trixi.jl are finalized explicitly before shutting down. This, however, is not how Julia is supposed to be used.
Estimated priority
"Priority: low" Should be solved eventually
I see an increasing demand for such a feature with a growing user base and when coupling t8code with more and more languages and frameworks.
Additional context
Trixi.jl issues a warning here when there are still un-freed t8code objects at shutdown: https://github.com/trixi-framework/Trixi.jl/blob/91eaaf68e95cdba8062a1e607172c8505a0a2503/src/auxiliary/t8code.jl#L35
Here is an example of how Trixi.jl finalizes t8code objects explicitly: https://github.com/trixi-framework/Trixi.jl/blob/91eaaf68e95cdba8062a1e607172c8505a0a2503/examples/t8code_3d_dgsem/elixir_euler_ec.jl#L93