
Gaussian16

Gaussian 16 is the newest version of the Gaussian quantum chemistry package, replacing Gaussian 09.

  • Current revision: C.01 (the previous revisions B.01 and A.03 are still installed)
  • Machines: All clusters
  • Location of latest revision: /uufs/chpc.utah.edu/sys/installdir/gaussian16/C01

Note that there are four different executable directories in this location -- corresponding to the presence (or absence) of the SSE4, AVX, or AVX2 instruction sets.  They are:

  1. legacy -- version for when none of the instruction sets listed below is available (no longer applicable to any of the CHPC resources)
  2. SSE4 -- version for lonepeak nodes; 12 core nodes on ash
  3. AVX -- version for all tangent nodes; 16 and 20 core nodes on kingspeak and ash
  4. AVX2 -- version for 24 and 28 core nodes on kingspeak and ash; notchpeak nodes

As the newer processors also support the older instruction sets, these builds are backwards compatible, i.e., the SSE4 and legacy versions will run on all nodes of the CHPC clusters, but performance is impacted and the runs will be slower.  Therefore it is best to use the optimal version for the nodes you run on.  This is addressed in the gaussian16 SLURM batch script provided (see below).
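
As an illustration of the idea (this is not the CHPC-provided script), a short tcsh fragment could pick the build from the CPU flags advertised by the node, using the C01 module names listed in the To Use section below:

# check the flags reported in /proc/cpuinfo and load the best matching build
if ( `grep -c -w avx2 /proc/cpuinfo` > 0 ) then
    module load gaussian16/AVX2.C01
else if ( `grep -c -w avx /proc/cpuinfo` > 0 ) then
    module load gaussian16/AVX.C01
else
    module load gaussian16/SSE4.C01
endif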

Please direct questions regarding Gaussian to the Gaussian developers. The home page for Gaussian is http://www.gaussian.com/. A user's guide and a programmer's reference manual are available from Gaussian; the user's guide is also available online at the Gaussian web site.

IMPORTANT NOTE: The licensing agreement with Gaussian allows for the use of this program ONLY for academic research purposes and only for research done in association with the University of Utah. NO commercial development or application in software being developed for commercial release is permitted. NO use of this program to compare the performance of Gaussian16 with its competitors' products (e.g., Q-Chem, Schrodinger, etc.) is allowed. The source code cannot be used or accessed by any individual involved in the development of computational algorithms that may compete with those of Gaussian Inc. If you have any questions concerning this, please contact Anita Orendt at anita.orendt@utah.edu.

In addition, in order to use gaussian16 you must be in the gaussian users group.  

To Use:

To set the environment to use G16 or GV6: 

module load gaussian16

Note that the SSE4 version is set as the default, as it will run on all CHPC resources. This is good for when you are doing testing or running GaussView.

You can, however, choose a different version by explicitly specifying the version, for example: 

module load gaussian16/SSE4.C01 for the SSE4 version

module load gaussian16/AVX.C01 for the AVX version

module load gaussian16/AVX2.C01  for the AVX2 version

Once you have the gaussian16 module loaded, you can start GaussView with: gv &

One additional change made with the installation of the B.01 version -- there is now a "gaussian" family defined in the module that makes it impossible to have two gaussian modules loaded at the same time.  This family contains all versions of gaussian09 and gaussian16. If you have the module for one version loaded and then load the module for a different version, you will see a message that the newly loaded module replaces the originally loaded one.

An example SLURM script, which queries the cluster and, if needed, the core count in order to choose which module to load, is provided at: /uufs/chpc.utah.edu/sys/installdir/gaussian16/etc/rng16

NOTE:  To run on the notchpeak AMD nodes, add the line: setenv PGI_FASTMATH_CPU sandybridge

In this script, you will need to set the appropriate partition, account, and wall time, as well as use the constraints setting if using a partition with multiple choices for the number of cores, especially when this affects the choice of the executable to be used. You will also need to set the following four environment variables. In choosing these settings, please consider the comments in the batch script as well as the considerations given below on this page.

setenv WORKDIR $HOME/g16project <-- enter the path to location of the input file FILENAME.com 
setenv FILENAME freq5k_3 <-- enter the filename, leaving off the .com extension 
setenv SCRFLAG LOCAL <-- either LOCAL, VAST, or NFS1 , see below
setenv NODES 2 <-- enter the number of nodes requested; should agree with nodes requested in the #SBATCH section
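
For orientation only, the fragment below is a minimal sketch of the overall shape of such a job script; the partition, account, wall time, and file names are placeholders, and the provided rng16 script contains the actual module-selection and scratch-handling logic:

#!/bin/tcsh
#SBATCH --nodes=2
#SBATCH --time=24:00:00
#SBATCH --partition=kingspeak
#SBATCH --account=myaccount

setenv WORKDIR $HOME/g16project
setenv FILENAME freq5k_3
setenv SCRFLAG LOCAL
setenv NODES 2

# the provided script chooses the optimal build and scratch area;
# here the default module is loaded for simplicity
module load gaussian16
cd $WORKDIR
g16 < $FILENAME.com > $FILENAME.log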

Some important considerations:

1) It is important that you use scratch space (set GAUSS_SCRDIR appropriately) for file storage during the job.  This choice is set by the SCRFLAG value you provide. Options are:

  • LOCAL -- /scratch/local/ for use of space local to the nodes
  • VAST -- /scratch/general/vast, available on all clusters
  • NFS1 -- /scratch/general/nfs1, available on all clusters

With the current size of hard drives local to the compute nodes, in most cases using /scratch/local is the best option as it is the fastest.
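
Purely as an illustration, a minimal tcsh sketch of how SCRFLAG might translate into GAUSS_SCRDIR follows; the per-user, per-job subdirectory is an assumption, and the provided script performs this setup for you:

# map the SCRFLAG choice onto a scratch directory and create it
if ( $SCRFLAG == LOCAL ) then
    setenv GAUSS_SCRDIR /scratch/local/$USER/$SLURM_JOB_ID
else if ( $SCRFLAG == VAST ) then
    setenv GAUSS_SCRDIR /scratch/general/vast/$USER/$SLURM_JOB_ID
else
    setenv GAUSS_SCRDIR /scratch/general/nfs1/$USER/$SLURM_JOB_ID
endif
mkdir -p $GAUSS_SCRDIR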

2) You should always set the %mem variable in your Gaussian input file. Please leave at least 64 MB of the total available memory on the nodes you will be using free for the operating system. Otherwise, your job will have problems, possibly die, and in some cases cause the node to go down. See the cluster documentation or use the appropriate SLURM commands to see the amount of memory on the nodes.
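
As a hypothetical illustration for a node with 128 GB of memory and 32 cores, the Link 0 section of the input file could look like the following, with %mem set well below the node total so that memory remains for the operating system (the method and basis set are only examples):

%mem=120GB
%nprocs=32
%chk=freq5k_3.chk
# B3LYP/6-31G(d) Opt Freq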

3) There are two levels of parallelization in Gaussian: shared memory and distributed.

As all of our compute nodes have multiple cores per node and nearly all of the Gaussian code makes efficient use of all cores, you should ALWAYS set %nprocs in your Gaussian input file to the number of cores per node.

If only using one node, NODES should be set to 1. For multi-node jobs, you must be sure to use the Linda version of the executable.  This is the one the provided script will use if the NODES environment variable is set to anything other than 1. Also note that with the larger core-count nodes, the majority of Gaussian jobs can be run on a single node.
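
Purely as an illustration of a two-node (Linda) run, the Link 0 section of the input could also name the workers directly; the node names below are placeholders, and in practice the provided script derives them from the SLURM node list when NODES is greater than 1:

%mem=120GB
%nprocs=32
%lindaworkers=node001,node002
%chk=freq5k_3.chk
# B3LYP/6-31G(d) Opt Freq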

4) When running in node sharing mode, DO NOT use the ntasks setting to specify the number of cores to use for the Gaussian job.  Instead, use #SBATCH --cpus-per-task=XX, where XX is the number of cores to be used for the run being submitted and should be the same as the value of the %nprocs setting in the Gaussian input file.  Please remember that when running in node sharing mode you also need to specify the amount of memory.  For more details see the node sharing page.
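
For example, a shared-mode request for a 16-core Gaussian run might contain the lines below; the partition name and memory amount are placeholders to be adjusted for your job, and %nprocs in the input file would also be set to 16:

#SBATCH --partition=notchpeak-shared
#SBATCH --cpus-per-task=16
#SBATCH --mem=64G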

Last Updated: 4/10/24