OpenFOAM 4.x installation on Shaheen


Run either on the cdl5 login node or on a Shaheen compute node as a SLURM job.

Clone the OpenFOAM Foundation repositories from GitHub:

git clone -b version-4.1 https://github.com/OpenFOAM/OpenFOAM-4.x.git OpenFOAM-4.1
git clone -b version-4.1 https://github.com/OpenFOAM/ThirdParty-4.x.git ThirdParty-4.1


Load the environment

module swap PrgEnv-cray PrgEnv-gnu
module swap gcc gcc/11.2.0
module load boost
module load cgal
module list

Make a few changes in the OpenFOAM configuration files so that the build recognizes the environment:


Introduce a Cray MPICH option for MPI library:

  1. Edit the file etc/ and add a Cray MPICH case to the switch statement, before the default case *):

CRAYMPICH)
    export FOAM_MPI=mpi-system
    export MPI_ROOT=${MPICH_DIR}
    export MPI_ARCH_PATH=$MPI_ROOT
    export MPI_ARCH_INC="-I$MPI_ROOT/include"
    export MPI_ARCH_LIBS=""
    export MPI_ARCH_FLAGS=""
    ;;

Now edit etc/ to modify the compiler selection. Since we are compiling a 64-bit version for the x86_64 architecture, the following edits apply:

Create a wmake rules directory with the appropriate compilers and flags; we will call it linux64cray.
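One way to create this directory is to start from an existing rules set (a sketch; the assumption is that the stock linux64Gcc rules are the closest template for a 64-bit GNU-based build):

```shell
# Copy the stock GCC rules as a starting point for the new Cray rules set
cd OpenFOAM-4.1/wmake/rules
cp -r linux64Gcc linux64cray
```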

Change the compiler and flags in the c and c++ rule files:
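As a sketch of the intent (the exact flags depend on the Cray programming environment; the lines below are assumptions, not the verbatim edits), the idea is to replace the gcc/g++ invocations with the Cray compiler wrappers cc and CC, which already know about the system libraries:

```make
# wmake/rules/linux64cray/c (sketch)
cc          = cc -m64

# wmake/rules/linux64cray/c++ (sketch)
CC          = CC -std=c++0x -m64
```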

Also create a new file in this same directory, wmake/rules/linux64cray/, with the name mplibCRAYMPICH:
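A minimal sketch of mplibCRAYMPICH, assuming the same three-variable layout as the stock mplib files (e.g. mplibSYSTEMMPI); it simply forwards the variables exported in the CRAYMPICH case added earlier:

```make
# wmake/rules/linux64cray/mplibCRAYMPICH (sketch)
PFLAGS     = $(MPI_ARCH_FLAGS)
PINC       = $(MPI_ARCH_INC)
PLIBS      = $(MPI_ARCH_LIBS)
```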


For ptscotch to use the correct MPI C compiler, we need to modify its Makefile under ThirdParty-4.1/etc/wmakeFiles/scotch/:
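The relevant change is to point the MPI C compiler at the Cray wrapper. A sketch, assuming the stock SCOTCH Makefile.inc variable names (CCS for the sequential build, CCP for the parallel ptscotch build, CCD for the library checks):

```make
# SCOTCH compiler selection (sketch): the Cray cc wrapper links MPI
# automatically, so no separate mpicc is needed
CCS        = cc
CCP        = cc
CCD        = cc
```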


Now we modify the etc/bashrc to pick up these options:
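The key settings are the compiler and MPI selection. A sketch: WM_COMPILER must match the suffix of the wmake rules directory created above (wmake resolves the rules directory as $WM_ARCH$WM_COMPILER, here linux64 + cray), and WM_MPLIB must match both the CRAYMPICH case and the mplibCRAYMPICH file:

```shell
# In etc/bashrc (sketch)
export WM_COMPILER=cray        # selects wmake/rules/linux64cray
export WM_MPLIB=CRAYMPICH      # selects the CRAYMPICH case and mplibCRAYMPICH
```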

For ThirdParty libraries, we may want to leverage some that are already installed on the system.

Change the following in the CGAL configuration file:
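A sketch of the change, assuming the file follows the stock etc/config.sh/CGAL layout and that the boost and cgal modules loaded earlier export their install prefixes (BOOST_DIR and CGAL_DIR below are assumptions about what those modules provide; check with module show):

```shell
# Use the system-provided boost and cgal instead of the ThirdParty builds (sketch)
boost_version=boost-system
cgal_version=cgal-system
export BOOST_ARCH_PATH=$BOOST_DIR
export CGAL_ARCH_PATH=$CGAL_DIR
```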

Lastly, before we activate the environment and compile, we need to change a datatype declaration in one of the header files. We are compiling with gcc/11.2.0, which has become more pedantic about standard conformance of C/C++ code.

If you have run a build in this directory before, it is necessary to clean it first:

wclean all

Also run ThirdParty-4.1/Allclean to make sure that all the intermediate directories are cleaned.

Now we can start the compilation process:
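A sketch of the build commands (the install location $HOME/OpenFOAM-4.1 and the core count are assumptions; adjust to your setup). WM_NCOMPPROCS controls how many parallel compile jobs wmake spawns:

```shell
# Activate the OpenFOAM environment (assumed install location)
source $HOME/OpenFOAM-4.1/etc/bashrc

# Build everything, keeping a log of the output
export WM_NCOMPPROCS=16
cd $HOME/OpenFOAM-4.1
./Allwmake 2>&1 | tee build.log
```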

If you want to submit a build job to SLURM to run on a compute node, the following jobscript is an example:
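A sketch of such a jobscript; the job name, partition, time limit, core count, and install location below are assumptions to adapt to your allocation:

```shell
#!/bin/bash
#SBATCH --job-name=of-build
#SBATCH --nodes=1
#SBATCH --time=10:00:00
#SBATCH --partition=workq

# Recreate the build environment on the compute node
module swap PrgEnv-cray PrgEnv-gnu
module load boost
module load cgal

# Activate OpenFOAM (assumed install location) and build
source $HOME/OpenFOAM-4.1/etc/bashrc
export WM_NCOMPPROCS=32
cd $HOME/OpenFOAM-4.1
./Allwmake > build.log 2>&1
```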