Here is an example build of WRF-Chem 3.8.x. We use the Intel compiler through the Cray compiler wrappers, together with prebuilt dependency libraries already installed on the system, both for convenience and for performance.

Since we are compiling the code to run on the compute nodes, we can either build on a login node with the Intel Haswell microarchitecture, or turn the build script into a jobscript and submit it so the build runs on a compute node. In the following we compile on the cdl5 login node, which has the same processor microarchitecture as the compute nodes.

Log in to cdl5:

ssh cdl5

Load the environment:

module swap PrgEnv-cray PrgEnv-intel
module load cray-netcdf cray-parallel-netcdf
module load flex/2.6.4 bison/3.0.4 jasper
module load craype-hugepages4M
export NETCDF=$NETCDF_DIR
export PNETCDF=$PARALLEL_NETCDF_DIR
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export WRF_CHEM=1
export WRF_KPP=1
export FLEX_LIB_DIR=${FLEX_DIR}/lib
export YACC="${BISON_DIR}/bin/yacc -d"
export JASPERLIB=${JASPER_DIR}/lib
export JASPERINC=${JASPER_DIR}/include
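
Before running configure, a quick sanity check that the modules loaded and the paths resolved can save time, for example:

module list
echo "NETCDF=$NETCDF PNETCDF=$PNETCDF"
echo "JASPERLIB=$JASPERLIB JASPERINC=$JASPERINC"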

Go to the source code folder and run ./configure. For parallelization with pure MPI, choose option 50 (dmpar, distributed memory); if you also plan to use OpenMP for multithreading within a node, choose option 51 (dm+sm, distributed plus shared memory). Here we choose 50:

./configure
checking for perl5... no
checking for perl... found /usr/bin/perl (perl)
Will use NETCDF in dir: /opt/cray/pe/netcdf/4.7.4.4/INTEL/19.1
Will use PNETCDF in dir: /opt/cray/pe/parallel-netcdf/1.12.1.4/INTEL/19.1
PHDF5 not set in environment. Will configure WRF for use without.
building WRF with chemistry option
building WRF with KPP chemistry option
Will use 'time' to report timing information
$JASPERLIB or $JASPERINC not found in environment, configuring to build without grib2 I/O...
------------------------------------------------------------------------
Please select from among the following Linux x86_64 options:

  1. (serial)   2. (smpar)   3. (dmpar)   4. (dm+sm)   PGI (pgf90/gcc)
  5. (serial)   6. (smpar)   7. (dmpar)   8. (dm+sm)   PGI (pgf90/pgcc): SGI MPT
  9. (serial)  10. (smpar)  11. (dmpar)  12. (dm+sm)   PGI (pgf90/gcc): PGI accelerator
 13. (serial)  14. (smpar)  15. (dmpar)  16. (dm+sm)   INTEL (ifort/icc)
                                         17. (dm+sm)   INTEL (ifort/icc): Xeon Phi (MIC architecture)
 18. (serial)  19. (smpar)  20. (dmpar)  21. (dm+sm)   INTEL (ifort/icc): Xeon (SNB with AVX mods)
 22. (serial)  23. (smpar)  24. (dmpar)  25. (dm+sm)   INTEL (ifort/icc): SGI MPT
 26. (serial)  27. (smpar)  28. (dmpar)  29. (dm+sm)   INTEL (ifort/icc): IBM POE
 30. (serial)               31. (dmpar)                PATHSCALE (pathf90/pathcc)
 32. (serial)  33. (smpar)  34. (dmpar)  35. (dm+sm)   GNU (gfortran/gcc)
 36. (serial)  37. (smpar)  38. (dmpar)  39. (dm+sm)   IBM (xlf90_r/cc_r)
 40. (serial)  41. (smpar)  42. (dmpar)  43. (dm+sm)   PGI (ftn/gcc): Cray XC CLE
 44. (serial)  45. (smpar)  46. (dmpar)  47. (dm+sm)   CRAY CCE (ftn/gcc): Cray XE and XC
 48. (serial)  49. (smpar)  50. (dmpar)  51. (dm+sm)   INTEL (ftn/icc): Cray XC
 52. (serial)  53. (smpar)  54. (dmpar)  55. (dm+sm)   PGI (pgf90/pgcc)
 56. (serial)  57. (smpar)  58. (dmpar)  59. (dm+sm)   PGI (pgf90/gcc): -f90=pgf90
 60. (serial)  61. (smpar)  62. (dmpar)  63. (dm+sm)   PGI (pgf90/pgcc): -f90=pgf90

For nesting, we choose the default option 1 (basic):

Compile for nesting? (1=basic, 2=preset moves, 3=vortex following) [default 1]: 
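
If you prefer to drive the whole build from a jobscript instead of answering interactively, the configure prompts read their answers from standard input in the WRF versions we have used, so the two choices above (50, then 1) can usually be piped in; check the generated configure.wrf afterwards:

printf '50\n1\n' | ./configure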

After the configuration is done, a few more changes need to be made in the resulting configure.wrf:

CFLAGS_LOCAL    =       -w -O2 -ip $(OPTAVX)
LDFLAGS_LOCAL   =       -ip $(OPTAVX)

FCOPTIM         =       -O2
FCREDUCEDOPT    =       $(FCOPTIM)
FCNOOPT         =       -O0 -fno-inline -no-ip

FCBASEOPTS_NO_G =       -ip -fp-model precise -w -ftz -fno-alias -align all $(FORMAT_FREE) $(BYTESWAPIO) #-vec-report6
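
To confirm the edits took effect, you can grep the variables listed above out of configure.wrf and compare:

grep -E '^(CFLAGS_LOCAL|LDFLAGS_LOCAL|FCOPTIM|FCREDUCEDOPT|FCNOOPT|FCBASEOPTS_NO_G)' configure.wrf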

This concludes the configuration. We can now run a parallel build of the code:

./compile -j 12 em_real &> compile.log

You can track the build progress in compile.log.
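
To follow the build while it runs and to confirm it succeeded, you can do something like the following; a successful em_real build leaves wrf.exe, real.exe, ndown.exe, and tc.exe in main/:

tail -f compile.log      # watch the build output as it is written
ls -l main/*.exe         # wrf.exe, real.exe, ndown.exe, tc.exe indicate success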

Note

Please be sure to reproduce the environment settings in your SLURM jobscript.
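
A minimal sketch of such a jobscript for running wrf.exe with pure MPI (option 50). The node count, walltime, and run directory below are placeholders and should be adjusted to your case:

#!/bin/bash
#SBATCH --job-name=wrfchem
#SBATCH --nodes=4                 # assumption: adjust to your domain/decomposition
#SBATCH --ntasks-per-node=32      # assumption: one MPI rank per Haswell core
#SBATCH --time=02:00:00

# Reproduce the build environment
module swap PrgEnv-cray PrgEnv-intel
module load cray-netcdf cray-parallel-netcdf
module load craype-hugepages4M
export WRFIO_NCD_LARGE_FILE_SUPPORT=1

cd /path/to/WRFV3/run             # assumption: your run directory with namelist.input and input files
srun ./wrf.exe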

If there are any issues, please contact help@hpc.kaust.edu.sa
