Using GrADS2 on Shaheen

Introduction

The Grid Analysis and Display System (GrADS) is an interactive desktop tool for easy access, manipulation, and visualization of earth science data.

Since it has quite a few dependencies, we found it more suitable to make it available on Shaheen as a container image, with CentOS 7.4 as the Linux operating system and all the dependencies required by GrADS2/2.2.1.

At the moment, Singularity can only open and display an X11 window on the login nodes. On the compute nodes, however, GrADS functionality can be used in non-GUI mode.
Please make sure your GUI-based computations are not memory hungry, as they can compromise other users' experience on that login node.
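As a minimal sketch, assuming you have already pulled the container image (see the next section) and logged in with X11 forwarding enabled (e.g. ssh -X), a GUI session on a login node could be started like this:

# on a login node, after connecting with X11 forwarding (ssh -X)
module load singularity
singularity exec $HOME/grads_2.2.1.sif grads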

Get the container image

First, pull the container image from DockerHub to the Shaheen filesystem:

cd $HOME
mkdir $HOME/tmp
export SINGULARITY_TMPDIR=$HOME/tmp
module load singularity
singularity pull docker://krccl/grads:2.2.1

The above commands will produce grads_2.2.1.sif, which is your Singularity image file.

GrADS requires some supplementary data that must be downloaded and pointed to by a couple of environment variables.

You can download the data from ftp://cola.gmu.edu/grads/data2.tar.gz and untar it in your /project directory.
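For example, the download and extraction could look like the sketch below, where the target directory under /project is a placeholder you should adapt to your own project path:

mkdir -p /project/your_project/grads_data    # placeholder path, adapt to your project
cd /project/your_project/grads_data
wget ftp://cola.gmu.edu/grads/data2.tar.gz
tar -xzf data2.tar.gz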

The other file you will need is your User Defined Plug-in Table (UDPT). It is a text file that tells GrADS where the plug-ins are located. Since we are using a container, the following file will work for it. Let's call this file udpt.

gxprint Cairo /software/grads-2.2.1/lib/libgxpCairo.so
gxdisplay Cairo /software/grads-2.2.1/lib/libgxdCairo.so

The following two environment variables will then be set:

export SINGULARITYENV_GAUDPT=/path/to/your/udpt
export SINGULARITYENV_GADDIR=/path/to/your/data/directory

Interactive session

To launch it interactively with a command prompt on a compute node:
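A minimal sketch of such a session is shown below; the resource options are assumptions you should adapt, and GrADS is started in batch mode (-b) with landscape orientation (-l) since compute nodes cannot display X11 windows:

salloc --nodes=1 --time=00:30:00
module load singularity
export SINGULARITYENV_GAUDPT=/path/to/your/udpt
export SINGULARITYENV_GADDIR=/path/to/your/data/directory
srun --pty singularity exec $HOME/grads_2.2.1.sif grads -bl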

Batch job

The commands we ran in the interactive session can be coded in a GrADS script for batch processing. Here is an example GrADS script for our workload; let's call it script.gs.
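The script itself is workload specific; as a minimal sketch, a GrADS script of this kind could look as follows, where the data descriptor file (mydata.ctl) and the variable name (myvar) are placeholders:

* script.gs -- open a dataset, draw a shaded plot, and write it to a PNG file
'open mydata.ctl'
'set gxout shaded'
'd myvar'
'printim output.png'
'quit'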

Here is an example jobscript to submit our workload to SLURM and schedule a job to run on a single node on our behalf:
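A sketch of such a jobscript is shown below; the job name, time limit, and paths are assumptions to adapt to your own project:

#!/bin/bash
#SBATCH --job-name=grads
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:30:00

module load singularity

export SINGULARITYENV_GAUDPT=/path/to/your/udpt
export SINGULARITYENV_GADDIR=/path/to/your/data/directory

# run the GrADS script in batch mode (-b), landscape (-l), executing the command given with -c
srun singularity exec $HOME/grads_2.2.1.sif grads -blc "run script.gs"

Submit it with sbatch followed by the jobscript file name.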

The output will look similar to that of the interactive session above.