To set the environment to use G16 or GV6:
module load gaussian16
An example SLURM script is provided:
In this script you will need to set the appropriate partition, account, and walltime, and use the constraints setting if the partition offers multiple choices for the number of cores, especially when this affects the choice of the executable to be used. You will also need to set the following four environment variables. When choosing these settings, please consider the comments in the batch script as well as the considerations given below on this page.
setenv WORKDIR $HOME/g16project   <-- enter the path to the directory containing the input file FILENAME.com
setenv FILENAME freq5k_3          <-- enter the filename, leaving off the .com extension
setenv SCRFLAG LOCAL              <-- either LOCAL, LPSERIAL (lonepeak only), GENERAL (all clusters), or KPSERIAL (all clusters except lonepeak)
setenv NODES 2                    <-- enter the number of nodes requested; must agree with the node count requested from SLURM
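For orientation, a minimal sketch of how these pieces might fit together in a batch script is shown below. The partition name, account name, and walltime are placeholders, and the actual CHPC-provided script will differ in its details:

```shell
#!/bin/csh
#SBATCH --partition=notchpeak    # placeholder: set your partition
#SBATCH --account=myaccount     # placeholder: set your account
#SBATCH --nodes=2               # must agree with the NODES variable below
#SBATCH --time=24:00:00         # placeholder: set your walltime

module load gaussian16

setenv WORKDIR  $HOME/g16project
setenv FILENAME freq5k_3
setenv SCRFLAG  LOCAL
setenv NODES    2
```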
Some important considerations:
1) It is important that you use scratch space (set GAUSS_SCRDIR appropriately) for file storage during the job. This choice is set by the SCRFLAG value. Options are:
# LOCAL -- /scratch/local for use of space local to the nodes
# GENERAL -- /scratch/general/lustre for the general CHPC lustre scratch
# KPSERIAL -- /scratch/kingspeak/serial for serial scratch on all clusters but lonepeak
# LPSERIAL -- /scratch/lonepeak/serial for serial scratch on lonepeak only
With the current size of the hard drives local to the compute nodes, /scratch/local is in most cases the best option, as it is the fastest.
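The mapping from SCRFLAG to GAUSS_SCRDIR can be sketched as follows. This is an illustration of the choices listed above, not the actual contents of the provided script, and the `map_scrdir` helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: translate a SCRFLAG value into the matching
# scratch path from the list above.
map_scrdir() {
  case "$1" in
    LOCAL)    echo /scratch/local ;;            # space local to the nodes
    GENERAL)  echo /scratch/general/lustre ;;   # general CHPC lustre scratch
    KPSERIAL) echo /scratch/kingspeak/serial ;; # all clusters but lonepeak
    LPSERIAL) echo /scratch/lonepeak/serial ;;  # lonepeak only
    *)        return 1 ;;                       # unknown flag
  esac
}

# Default to LOCAL, the fastest choice in most cases.
GAUSS_SCRDIR=$(map_scrdir "${SCRFLAG:-LOCAL}") || {
  echo "Unknown SCRFLAG: $SCRFLAG" >&2
  exit 1
}
export GAUSS_SCRDIR
echo "GAUSS_SCRDIR=$GAUSS_SCRDIR"
```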
2) You should always set the %mem variable in your Gaussian input file. Please leave at least 64 MB of the total available memory on the nodes you will be using for the operating system. Otherwise your job will have problems, may die, and in some cases can take the node down. See the cluster documentation or use the appropriate SLURM commands to find the amount of memory on the nodes.
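As a worked example of this headroom calculation (the 64 GB node size and the 2 GB headroom here are assumed values, chosen well above the 64 MB minimum; check your nodes' actual memory as noted above):

```shell
#!/bin/sh
# Illustrative only: compute a %mem value that leaves headroom for the OS.
NODE_MEM_MB=65536      # assumed: a node with 64 GB of memory
OS_HEADROOM_MB=2048    # assumed: generous headroom, >> the 64 MB minimum
GAUSSIAN_MEM_MB=$((NODE_MEM_MB - OS_HEADROOM_MB))

# This line would go in the Link 0 section of FILENAME.com:
echo "%mem=${GAUSSIAN_MEM_MB}MB"
```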
3) There are two levels of parallelization in Gaussian: shared memory and distributed memory (Linda).
As all of our compute nodes have multiple cores per node and nearly all of the Gaussian code makes efficient use of all cores, you should ALWAYS set %nprocs in your Gaussian input file to the number of cores per node.
If only using one node, set NODES to 1. For multi-node jobs, you must be sure to use the Linda version of the executable. This is the one the provided script will use when the NODES environment variable is set to anything other than 1.
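The executable choice described above can be sketched as the following branch. The binary names here are placeholders, not the actual names the provided script uses:

```shell
#!/bin/sh
# Sketch of the single-node vs. multi-node executable choice.
# "g16" and "g16-linda" are hypothetical names for illustration.
NODES=${NODES:-1}

if [ "$NODES" -gt 1 ]; then
  G16_EXE=g16-linda   # Linda (distributed) build for multi-node runs
else
  G16_EXE=g16         # shared-memory build suffices on one node
fi
echo "Using executable: $G16_EXE"
```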