
APSIS-ANALYSIS/PERIGEE

PERIGEE is a nonlinear dynamic finite element analysis code for multiphysics analysis. The code has been developed with the goal of providing an object-oriented framework for parallel implementation of multiphysics problems. Copyright and licensing information can be found in LICENSE.

Table of Contents

Install

We recommend using a UNIX-like operating system, such as Linux or macOS, for code development. If you are a Windows user, you may refer to this for a detailed install guide. The following instructions are based on a Linux Ubuntu system, and there could be minor differences on a Mac system.

  1. A quick guide for library installation is here and a more advanced guide is there. After the libraries are all properly installed, proceed to step 2.

Notice that VTK is typically installed as a shared library in a non-standard folder. One therefore has to edit the LD_LIBRARY_PATH environment variable for the linker to locate the .so files. Open the .bash_profile or .bashrc file and edit the LD_LIBRARY_PATH variable. Below is an example with my VTK installed at /Users/juliu/lib/VTK-8.2.0/.

export LD_LIBRARY_PATH=/Users/juliu/lib/VTK-8.2.0/lib:$LD_LIBRARY_PATH

For more information on this environment variable, see here.
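Prepending (rather than overwriting) keeps any libraries already on the search path visible. The following sketch uses the example VTK path from above and simply confirms that the new directory is the first entry the linker will search:

```shell
# Prepend the VTK lib directory to the linker search path.
export LD_LIBRARY_PATH=/Users/juliu/lib/VTK-8.2.0/lib:$LD_LIBRARY_PATH

# The first colon-separated entry should now be the VTK lib directory.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | head -n 1
# → /Users/juliu/lib/VTK-8.2.0/lib
```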

  2. After the libraries are installed, add a configuration file named system_lib_loading.cmake in the conf folder. You may find a file called system_lib_loading_example.cmake, which serves as an example. In this file, you will have to specify the paths for the external libraries,
  • Set VTK_DIR to the VTK library location (e.g. /home/jliu/lib/VTK-7.1.1-shared).
  • Set PETSC_DIR to the PETSc library location (e.g. /home/jliu/lib/petsc-3.11.3).
  • Set PETSC_ARCH to the value used in PETSc installation (e.g. arch-linux2-c-debug).
  • Set METIS_DIR to the METIS library location (e.g. /home/jliu/lib/metis-5.0.3).
  • Set HDF5_DIR to the HDF5 library location (e.g. /home/jliu/lib/hdf5-1.8.16).
  • Set CMAKE_C_COMPILER to $PETSC_DIR/$PETSC_ARCH/bin/mpicc.
  • Set CMAKE_CXX_COMPILER to $PETSC_DIR/$PETSC_ARCH/bin/mpicxx.

After the edit, save the CMake file and rename it to system_lib_loading.cmake, and you have your own configuration file set up. Notice that the file name system_lib_loading.cmake is listed in .gitignore, meaning that git will not track this file. You may want to keep a copy of this file outside PERIGEE, because when you switch to another branch, PERIGEE may not keep a copy of it.
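Putting the items above together, a system_lib_loading.cmake file might look like the following sketch; every path is illustrative (taken from the examples above) and must be replaced with your own install locations:

```cmake
# conf/system_lib_loading.cmake -- sketch only; adjust all paths.
set(VTK_DIR /home/jliu/lib/VTK-7.1.1-shared)
set(PETSC_DIR /home/jliu/lib/petsc-3.11.3)
set(PETSC_ARCH arch-linux2-c-debug)
set(METIS_DIR /home/jliu/lib/metis-5.0.3)
set(HDF5_DIR /home/jliu/lib/hdf5-1.8.16)

# Use the MPI compiler wrappers shipped with the PETSc installation.
set(CMAKE_C_COMPILER ${PETSC_DIR}/${PETSC_ARCH}/bin/mpicc)
set(CMAKE_CXX_COMPILER ${PETSC_DIR}/${PETSC_ARCH}/bin/mpicxx)
```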

Build

First, create a folder build outside the source directory. Enter that folder, and run the following command to build, as an example, a suite of heat equation solvers.

cmake ~/PERIGEE/examples/linearPDE/

CMake will print some information to the screen. Take a look at the variable CMAKE_BUILD_TYPE. If its value is Debug, your code will be compiled in debug mode. If you want to make the code faster, run cmake as follows,

cmake ~/PERIGEE/examples/linearPDE/ -DCMAKE_BUILD_TYPE=Release

Now the value of CMAKE_BUILD_TYPE is set to Release, and the code will be compiled in optimized mode. For more information about the compiler, please refer to this. Of course, fully optimized code requires that your external libraries, especially PETSc, are also compiled in optimized mode. Refer to the advanced guide for more information on building the libraries in release mode. After CMake generates the Makefile, run the following command to compile the source code.

make

Of course, you may add -j2 to run make with 2 threads. If make complains about the auto keyword or nullptr, your default compiler does not support C++11. You may add set(CMAKE_CXX_STANDARD 11) in your .cmake configuration file to enforce the C++11 standard.
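If you do need to pin the standard, a minimal fragment for the configuration file could look like this (CMAKE_CXX_STANDARD_REQUIRED additionally makes C++11 a hard requirement rather than a fallback):

```cmake
# Enforce C++11 for compilers that default to an older standard.
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
```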

Tutorial

In general, one has to go through the following steps for a simulation.

  • Obtain the mesh in vtu/vtp format from a front-end code, e.g., SimVascular or Gmsh.
  • Run a preprocessor to load the mesh, assign boundary conditions, and partition the mesh. The preprocessor is a serial code and may need to be run on a large-memory cluster node if you are dealing with a very large problem.
  • Run a finite element analysis code to solve the partial differential equations. The solutions will be saved on disk in the binary format.
  • Run a preprocessor for postprocessing. This step re-partitions the mesh in preparation for postprocessing tasks, such as visualization, error calculation, etc. Similar to the preprocessor, this routine should be run in serial and may consume a lot of memory if your mesh is fine. With this routine, we are able to run the postprocessing routine with a different number of CPUs. For example, suppose we run the FEM analysis with, say, 360 CPUs; visualizing the solution is much less computationally intensive and may only need, say, 24 CPUs, so you should repartition the domain into 24 sub-domains in this step.
  • Run a postprocessor in parallel. Often, this step refers to the visualization of the solutions. The visualization routine reads the binary solution files and writes the data into (parallel) vtu/vtp format. The data can then be visualized in ParaView.
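The steps above can be sketched as a command sequence. The executable names below are placeholders (the actual driver names depend on the example suite you build), and mpirun comes from the MPI bundled with your PETSc installation:

```shell
./preprocessor                    # serial: load mesh, assign BCs, partition for analysis
mpirun -np 360 ./analysis_driver  # parallel FEM solve; writes binary solutions to disk
./prepost                         # serial: repartition (e.g. into 24 sub-domains) for postprocessing
mpirun -np 24 ./vis_driver        # parallel visualization; writes vtu/vtp files for ParaView
```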

Simulation Samples

The vortex-induced vibration of an elastic plate with Re $\approx$ 3 $\times$ 10^4. The mesh consists of 18 million linear tetrahedral elements for the fluid and 0.7 million elements for the solid. The variational multiscale formulation provides the LES technique in the flow problem, and the time integration is based on the generalized-α scheme.

Pulmonary CFD

A fluid-structure interaction simulation of a pulmonary model is performed using the unified continuum and variational multiscale formulation. The model and mesh are prepared by W. Yang. The solid model is fully incompressible and is numerically modeled via the residual-based variational multiscale formulation.

Pulmonary FSI

References

Theory

Solver technology

Verification & Validation

HPC

  • D. Goldberg, What every computer scientist should know about floating-point arithmetic.
  • U. Drepper, What every programmer should know about memory.

C++

Contact

Dr. Ju Liu, [email protected], [email protected]

Acknowledgement

National Natural Science Foundation of China, Grant numbers 12172160, 12472201

Shenzhen Science and Technology Program, Grant number JCYJ20220818100600002