To set the environment to use G16 or GV6:
module load gaussian16
Note that the E64 build is set as the default because it will run on all CHPC resources. This makes it a good choice for testing or for running GaussView.
You can, however, choose a different build by specifying the version explicitly, for example:
module load gaussian16/E64.B01
One additional change was made with the installation of the new B01 version: there is now a "gaussian" family defined in the module, which makes it impossible to have two Gaussian modules loaded at the same time. This family contains all versions of gaussian09 and gaussian16. If you have the module for one version loaded and then load the module for a different version, you will see a message that the newly loaded module replaces the originally loaded one.
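For example, loading one Gaussian module and then another swaps them rather than stacking them. The module versions below are only illustrative; run "module avail" to see what is currently installed:
module load gaussian16/E64.B01
module load gaussian09
module list        # only one gaussian module remains loaded; gaussian16/E64.B01 was replaced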
An example SLURM script, which queries the cluster and, if needed, the core count in order to choose which module to load, is provided:
NOTE: To use on the notchpeak AMD nodes, add the line:
setenv PGI_FASTMATH_CPU sandybridge
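If your script may run on a mix of node types, one illustrative way to set this only where it is needed (this check is not part of the provided script) is:
# Illustrative tcsh snippet: set PGI_FASTMATH_CPU only when running on AMD hardware
set cpu_vendor = `grep -m1 vendor_id /proc/cpuinfo | awk '{print $3}'`
if ( "$cpu_vendor" == "AuthenticAMD" ) then
    setenv PGI_FASTMATH_CPU sandybridge
endif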
In this script, you will need to set the appropriate partition, account, and walltime, and use the constraints setting if the partition offers multiple choices for the number of cores, especially when this affects the choice of the executable to be used. You will also need to set the following four environment variables. In choosing these settings, please consider the comments in the batch script as well as the considerations given below on this page.
setenv WORKDIR $HOME/g16project   <-- enter the path to the location of the input file FILENAME.com
setenv FILENAME freq5k_3          <-- enter the filename, leaving off the .com extension
setenv SCRFLAG LOCAL              <-- either LOCAL, LPSERIAL (lonepeak only), GENERAL (all clusters), or KPSERIAL (all clusters)
setenv NODES 2                    <-- enter the number of nodes requested; should agree with the number of nodes specified in the SLURM directives
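As a rough sketch, the top of such a job script might look like the following; the partition, account, and walltime are placeholders to be replaced with your own values, the --nodes request must match the NODES setting, and the provided CHPC script contains the full module-selection logic:
#!/bin/tcsh
#SBATCH --partition=notchpeak
#SBATCH --account=myaccount
#SBATCH --nodes=2
#SBATCH --time=24:00:00

setenv WORKDIR $HOME/g16project
setenv FILENAME freq5k_3
setenv SCRFLAG LOCAL
setenv NODES 2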
Some important considerations:
1) It is important that you use scratch space (set GAUSS_SCRDIR appropriately) for file storage during the job. This choice is set by the SCRFLAG value. Options are:
# LOCAL -- /scratch/local for use of space local to the nodes
# GENERAL -- /scratch/general/lustre for the general CHPC lustre scratch
# KPSERIAL -- /scratch/kingspeak/serial for serial scratch on all clusters
# NFS1 -- /scratch/general/nfs1 for serial scratch on all clusters
With the current size of the hard drives local to the compute nodes, using /scratch/local is in most cases the best option, as it is the fastest.
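For instance, with SCRFLAG set to LOCAL, the script will end up pointing GAUSS_SCRDIR at node-local scratch along these lines (a simplified sketch; the path layout and per-job cleanup are handled by the actual script):
setenv GAUSS_SCRDIR /scratch/local/$USER/$SLURM_JOB_ID
mkdir -p $GAUSS_SCRDIR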
2) You should always set the %mem variable in your Gaussian input file. Please leave at least 64 MB of the total available memory on the nodes you will be using for the operating system. Otherwise, your job will have problems, may die, and in some cases can cause the node to go down. See the cluster documentation or use the appropriate SLURM commands to see the amount of memory on the nodes.
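As a hypothetical illustration, on a node with 64 GB of memory the Link 0 section of the input file might reserve somewhat less than the total so the operating system has room (the values are placeholders and should be chosen to match your nodes):
%mem=60GB
%chk=freq5k_3.chk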
3) There are two levels of parallelization in Gaussian: shared memory and distributed memory.
As all of our compute nodes have multiple cores per node and nearly all of the Gaussian code makes efficient use of all cores, you should ALWAYS set %nprocs in your Gaussian input file to the number of cores per node.
If only using one node, NODES should be set to 1. For multi-node jobs, you must be sure to use the Linda version of the executable; this is the one the provided script will use if the NODES environment variable is set to something other than 1.
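For instance, for a run on two 32-core nodes (the core and node counts here are placeholders), %nprocs is set to the cores per node in the input file, while NODES in the batch script selects the Linda executable:
%nprocs=32        <-- in FILENAME.com: cores per node, not total cores across nodes
setenv NODES 2    <-- in the batch script; a value other than 1 selects the Linda executable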
4) When running in node sharing mode, DO NOT use the ntasks setting to specify the number of cores for the Gaussian job. Instead, use
#SBATCH --cpus-per-task=XX
where XX is the number of cores to be used for the run being submitted and should be the same as the value used in the nprocs setting of the Gaussian input file. Please remember that when running in node sharing mode you also need to specify the amount of memory. For more details see the node sharing page.
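As a hedged example, a shared-node run using 8 cores and 32 GB of memory (both values are placeholders) would include directives along these lines, with %nprocs=8 set in the Gaussian input file to match:
#SBATCH --cpus-per-task=8
#SBATCH --mem=32G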