Gaussian16

Gaussian 16 is the newest version of the Gaussian quantum chemistry package, replacing Gaussian 09.

  • Current revision: A.03
  • Machines: All clusters
  • Location of latest revision: /uufs/chpc.utah.edu/sys/installdir/gaussian16/A03

Note that there are four different executable directories in this location, corresponding to the presence (or absence) of the SSE4, AVX, or AVX2 instruction sets.  They are:

  1. E6L for when none of the instruction sets listed above are available (legacy, no longer needed on any of the CHPC resources)
  2. E64 for when SSE4 is available (ember and lonepeak nodes)
  3. E6A for when AVX is available (all tangent nodes, 16 and 20 core nodes on kingspeak and ash)
  4. E6B for when AVX2 is available (24 and 28 core nodes on kingspeak and ash)

Because the newer processors also support the older instruction sets, these executables are backwards compatible; i.e., the E6L and E64 versions will run on all nodes on the CHPC clusters, but performance is impacted and the runs will be slower.  Therefore it is best to use the optimal version for the nodes assigned to your job.  This is addressed in the gaussian16 SLURM batch script provided (see below).
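
For illustration only, the selection done by the script can be sketched in tcsh as below; the variable names are hypothetical and the actual rng16 logic may differ:

# Hypothetical sketch (tcsh) of picking the executable directory from the
# CPU flags of the assigned node; the real rng16 script may do this differently.
# The legacy E6L directory is omitted since it is no longer needed at CHPC.
set g16root = /uufs/chpc.utah.edu/sys/installdir/gaussian16/A03
if ( `grep -c avx2 /proc/cpuinfo` > 0 ) then
    setenv g16dir $g16root/E6B     # AVX2 nodes
else if ( `grep -c avx /proc/cpuinfo` > 0 ) then
    setenv g16dir $g16root/E6A     # AVX nodes
else
    setenv g16dir $g16root/E64     # SSE4 nodes
endif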

Please direct questions regarding Gaussian itself to the Gaussian developers. The home page for Gaussian is http://www.gaussian.com/. A user's guide and a programmer's reference manual are available from Gaussian; the user's guide is also available online at the Gaussian web site.

IMPORTANT NOTE: The licensing agreement with Gaussian allows for the use of this program ONLY for academic research purposes and only for research done in association with the University of Utah. NO commercial development or application in software being developed for commercial release is permitted. NO use of this program to compare the performance of Gaussian16 with competitors' products (e.g., Q-Chem, Schrodinger, etc.) is allowed. The source code cannot be used or accessed by any individual involved in the development of computational algorithms that may compete with those of Gaussian Inc. If you have any questions concerning this, please contact Anita Orendt at anita.orendt@utah.edu.

In addition, in order to use Gaussian16 you must be in the gaussian users group.

To Use:

To set the environment to use G16 or GV6 (GaussView 6):

module load gaussian16

An example SLURM script is provided: /uufs/chpc.utah.edu/sys/installdir/gaussian16/etc/rng16

In this script, you will need to set the appropriate partition, account, and walltime, as well as use the constraints setting if using a partition with multiple choices for the number of cores, especially when this affects the choice of the executable to be used. You will also need to set the following four environment variables. In choosing these settings, please consider the comments in the batch script as well as the considerations given below on this page.

setenv WORKDIR $HOME/g16project <-- enter the path to the location of the input file FILENAME.com 
setenv FILENAME freq5k_3 <-- enter the filename, leaving off the .com extension 
setenv SCRFLAG LOCAL <-- either LOCAL, LPSERIAL (lonepeak only), GENERAL (all clusters), or KPSERIAL (all clusters) 
setenv NODES 2 <-- enter the number of nodes requested; must agree with the number of nodes in the SLURM directives
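
For reference, the matching SLURM directives at the top of the script might look like the following; the account and constraint values are placeholders to be replaced with your own, not CHPC-specific recommendations:

#SBATCH --nodes=2 <-- must agree with the NODES environment variable above
#SBATCH --partition=kingspeak
#SBATCH --account=youraccount
#SBATCH --time=24:00:00
#SBATCH --constraint=c28 <-- example: pin the job to 28-core (AVX2) nodes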

Some important considerations:

1) It is important that you use scratch space (set GAUSS_SCRDIR appropriately) for file storage during the job.  This choice is set by the SCRFLAG value chosen. Options are:

  • LOCAL -- /scratch/local for use of space local to the nodes
  • GENERAL -- /scratch/general/lustre for the general CHPC lustre scratch
  • KPSERIAL -- /scratch/kingspeak/serial for serial scratch on all clusters but lonepeak
  • LPSERIAL -- /scratch/lonepeak/serial for serial scratch on lonepeak only

With the current size of hard drives local to the compute nodes, in most cases using /scratch/local is the best option as it is the fastest.
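
As a rough sketch of what this means in practice, the script maps SCRFLAG to the GAUSS_SCRDIR environment variable that Gaussian reads; the per-user, per-job directory layout below is illustrative, and the actual rng16 logic may differ:

# Illustrative mapping of SCRFLAG to GAUSS_SCRDIR (tcsh)
if ( $SCRFLAG == LOCAL ) then
    setenv GAUSS_SCRDIR /scratch/local/$USER/$SLURM_JOB_ID
else if ( $SCRFLAG == GENERAL ) then
    setenv GAUSS_SCRDIR /scratch/general/lustre/$USER/$SLURM_JOB_ID
else if ( $SCRFLAG == KPSERIAL ) then
    setenv GAUSS_SCRDIR /scratch/kingspeak/serial/$USER/$SLURM_JOB_ID
else if ( $SCRFLAG == LPSERIAL ) then
    setenv GAUSS_SCRDIR /scratch/lonepeak/serial/$USER/$SLURM_JOB_ID
endif
mkdir -p $GAUSS_SCRDIR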

2) You should always set the %mem variable in your Gaussian input file. Please leave at least 64 MB of the total available memory on the nodes you will be using for the operating system. Otherwise, your job will have problems, possibly die, and in some cases cause the node to go down. See the cluster documentation or use the appropriate SLURM commands to see the amount of memory on the nodes.
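
For example, on a hypothetical node with 64 GB of memory, the Link 0 section of the input file might reserve most, but not all, of it; the value and route line below are illustrative, not a recommendation:

%mem=60GB <-- illustrative: leaves headroom for the OS on a 64 GB node
%chk=freq5k_3.chk
#P B3LYP/6-31G(d) Opt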

3) There are two levels of parallelization in Gaussian: shared memory and distributed memory (via Linda).

As all of our compute nodes have multiple cores per node and nearly all of the Gaussian code makes efficient use of all cores, you should ALWAYS set %nprocs in your Gaussian input file to the number of cores per node.

If only using one node, NODES should be set to 1. For multi-node jobs, you must be sure to use the Linda version of the executable.  This is the one the provided script will use if the NODES environment variable is set to anything other than 1.
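
Putting this together, the Link 0 header for a hypothetical two-node run on 28-core nodes might look like the lines below; the node names are hypothetical, and the batch script, not the user, normally supplies the Linda worker list:

%mem=60GB <-- illustrative; stay within the per-node memory
%nprocshared=28 <-- the %nprocs setting mentioned above; use all cores on each 28-core node
%lindaworkers=kp001,kp002 <-- hypothetical node names; set by the batch script, not by hand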

Last Updated: 3/28/17