
Ansys

Ansys is a suite of engineering analysis software packages including finite element analysis, structural analysis, computational fluid dynamics, explicit and implicit methods, and heat transfer. For University of Utah users, CHPC utilizes the College of Engineering License; for Utah State University researchers we utilize the USU licenses.

  • Version: 21.2, 22.2, 23.1
  • Machine: all clusters
  • Location:  /uufs/chpc.utah.edu/sys/installdir/ansys 

The Ansys suite includes numerous products, some of which we list below. The common launch interface, the Ansys Workbench, is also available through the Open OnDemand web portal, which provides an easy way to run the GUI inside of a job on CHPC clusters.
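
To see what versions are available and to load one of the versions listed above (a brief example using CHPC's Lmod module commands):

module spider ansys
module load ansys/23.1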

NOTE: Ansys 22.2 ships with an Intel MPI version that does not work well with the network drivers on the Rocky Linux 8 operating system that our clusters run. Please explicitly specify OpenMPI instead; details are given below.

CFX

CFX is one of the computational fluid dynamics (CFD) programs that come with the Ansys suite. It can be run with a batch script like the following:

#!/bin/bash
#SBATCH --time=1:00:00 # walltime, abbreviated by -t
#SBATCH --nodes=2      # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.out-%N # name of the stdout, using the job number (%j) and the first node (%N)
#SBATCH --ntasks=4    # number of MPI tasks, abbreviated by -n
#SBATCH --account=notchpeak-shared-short     # account - abbreviated by -A
#SBATCH --partition=notchpeak-shared-short  # partition, abbreviated by -p

module load ansys/22.2

# specify work directory and input file names
export WORKDIR=`pwd`
# name of the input file
export INPUTNAME=Benchmark.def
# our input source is one of the CFX examples
export INPUTDIR=$ANSYS222_ROOT/CFX/examples

# copy the example input file to the work directory and cd there
cp $INPUTDIR/$INPUTNAME $WORKDIR
cd $WORKDIR

export SLURM_NODEFILE=nodes.$$
srun hostname -s | sort > $SLURM_NODEFILE
NODELIST=`cat $SLURM_NODEFILE`
NODELIST=`echo $NODELIST | sed -e 's/ /,/g'`
echo "Ansys nodelist="$NODELIST

cfx5solve -def $INPUTNAME -parallel -par-dist $NODELIST -start-method "Open MPI Distributed Parallel"

Note that for ansys/22.2 we set the parallel start method (-start-method) to OpenMPI, since the default Intel MPI version does not work well on our clusters.
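
To submit the script, save it to a file (the name run_cfx.slr below is just an example) and use sbatch:

sbatch run_cfx.slr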

Fluent

One of the commonly used Ansys packages is Fluent, a computational fluid dynamics (CFD) software tool. Fluent includes well-validated physical modeling capabilities that deliver fast, accurate results across a wide range of CFD and multiphysics applications.

To use Fluent via the graphical interface:

module load ansys
fluent

This will display the Fluent launcher window, where you can select options such as 2D vs. 3D; from there you launch the Fluent interface. Note that this runs on the node you are on when you issue the fluent command. For any substantial computational work you should not run Fluent on the general interactive nodes (see our Acceptable Use of Interactive Node Policy), but should do so on a compute node of the Notchpeak cluster, e.g. by

salloc -n 1 -N 1 -p notchpeak-shared-short -A notchpeak-shared-short -t 8:00:00 
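
Once the interactive job starts and you have a shell on the compute node, load the module and launch Fluent there, for example (the 2ddp solver choice and the use of all allocated cores are only an illustration):

module load ansys
fluent 2ddp -t$SLURM_NTASKS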

NOTE: Ansys 22.2 Fluent meshing does not start correctly from the Ansys Workbench. Please start it explicitly as fluent -meshing -mpi=openmpi. Notice that we also set OpenMPI for the MPI parallelization, since the Intel MPI version shipped with Ansys 22.2 does not work well on our clusters.

To use Fluent via a batch script:

You can use the graphical interface to create your model and save the case file, and then submit the simulation via a batch script. Below is an example script that you can adapt to your needs. This script will run Fluent on two kingspeak compute nodes. See our Slurm documentation page for details on the choice of the SBATCH directives.

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --partition=kingspeak-guest
#SBATCH --account=owner-guest
#SBATCH --nodes 2
#SBATCH --ntasks 40
#SBATCH --constraint="c20"
#SBATCH -o slurm-%j.out-%N
#
#Below WORKDIR is the directory containing the input file
#INPUTFILE and OUTPUTFILE are the names of the input
#and output respectively
#
export WORKDIR=$HOME/fluent_test
export INPUTFILE=12.1.inp
export OUTPUTFILE=12.1.out
cd $WORKDIR
module load ansys
# build a comma-separated node list for Fluent
FLUENTNODES="$(scontrol show hostnames)"
FLUENTNODES=$(echo $FLUENTNODES | tr ' ' ',')
# time the Fluent run; -g runs without the GUI, -i reads the journal input file
time fluent 2ddp -t$SLURM_NTASKS -pib -cnf=$FLUENTNODES -g -i $INPUTFILE -mpi=openmpi > $OUTPUTFILE
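
The -i input file in the script above is a Fluent journal that drives the solver without the GUI. Its exact contents depend on your simulation; below is only a minimal sketch, assuming a hypothetical case file 12.1.cas and standard Fluent TUI commands, which you would adapt to your own workflow:

cat > 12.1.inp << EOF
; read the case, iterate, save the results, and exit (file names are placeholders)
/file/read-case 12.1.cas
/solve/iterate 100
/file/write-case-data 12.1_final.cas
exit
yes
EOF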

Ansys Electronics Desktop

ANSYS Electronics Desktop (EDT) is the premier, unified platform for electromagnetic, circuit and system simulation. Gold-standard tools like ANSYS HFSS, Maxwell, Q3D Extractor, and Simplorer are built natively in the Electronics Desktop, which serves as a universal Pre/Post processor for these tools. For more details see the Ansys Electronics Desktop webpage.

CHPC utilizes the College of Engineering license, which includes Maxwell, HFSS, HFSS HPC, and HFSS Desktop; its status can be seen here. If you are not in the College of Engineering, please contact CHPC about how you can use it. For Utah State University users, we use the USU license.

  • Version: 21.2
  • Machine: all clusters
  • Location:  /uufs/chpc.utah.edu/sys/installdir/ansysedt

There are two possible ways to run Ansys EDT. One is through its Graphical User Interface (GUI), and the other is through a batch script.

Using the GUI

The GUI approach allows one to design and build the simulation and then run it from the GUI; however, the execution is limited to a single node. One also has to run the GUI via an interactive job session, which may require waiting until resources (compute nodes) free up.

The easiest way to run the GUI is from the Open OnDemand web portal. This submits a job on a cluster and runs the GUI inside of this job without one needing to use any SLURM scheduler commands.

To run the GUI from the terminal, first request the resources (a compute node) using the SLURM scheduler; once they are available (the command prompt returns), load the ansysedt module and then run the ansysedt command. The best choice for this purpose is the notchpeak cluster's notchpeak-shared-short partition:

salloc -n 1 -N 1 -p notchpeak-shared-short -A notchpeak-shared-short -t 8:00:00 
module load ansysedt
ansysedt


Once the GUI loads up, set up your problem and run it as a job on the local node. Typically, it should be fine to "Use automatic settings" under the "Compute Resources" tab of the "Submit Job" dialog window.

Using batch

Running the simulation in batch is preferred for several reasons. First, one can use more than one node, which may speed up the simulation or make it feasible thanks to the larger available memory. Second, the job will wait in the SLURM queue until the resources are available and then start without the need for user interaction.

To run in batch, one has to first design the simulation using the GUI and save it (e.g. as a *.aedt file). Then modify a few items in the run_ansysedt.slr script as noted in the script, in particular the number of nodes and tasks to run on and the input directory and file name. The script is also available at /uufs/chpc.utah.edu/sys/installdir/hfss/scripts/run_ansysedt.slr, which you can simply copy to your input directory and modify there.
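
For example (the input directory name below is just a placeholder):

cd $HOME/my_edt_project
cp /uufs/chpc.utah.edu/sys/installdir/hfss/scripts/run_ansysedt.slr .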

Ansys EDT is parallelized on two levels, over distributed MPI tasks and over threads; that is, we can run multiple threads per MPI task and achieve a higher level of parallelism. The script above figures out the number of threads per task based on the node count, the task count, and the cores per node, so as to fully utilize the available CPU cores on each node.
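
For illustration, the core of that calculation can be written with standard SLURM environment variables; this is only a sketch of the idea, not a copy of run_ansysedt.slr:

# tasks per node and threads per task, so that tasks x threads fills each node's cores
TASKS_PER_NODE=$(( SLURM_NTASKS / SLURM_NNODES ))
THREADS_PER_TASK=$(( SLURM_CPUS_ON_NODE / TASKS_PER_NODE ))
echo "Running $TASKS_PER_NODE tasks per node with $THREADS_PER_TASK threads per task"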

Once the input file and the script are ready, submit the job as

sbatch run_ansysedt.slr

You can monitor the status of the job with

squeue -u myUNID

with myUNID being your uNID.

Once the job starts, it is important to check the job log file (myInput.log) for a few things to ensure efficient execution. In particular, at the start of the log file the program writes out the user-requested job distribution, for example, on 2 kingspeak nodes with 2 tasks per node:

Num tasks for machine kp045 = 2
Num cores for machine kp045 = 8
RAM limit for machine kp045 = -1
Num tasks for machine kp046 = 2
Num cores for machine kp046 = 8
RAM limit for machine kp046 = -1

However, Ansys EDT then does its own evaluation of the simulation and adjusts the resources, which it then writes to the log file, e.g.

Machines:
kp045: RAM: 90%, task0:4 cores, task1:4 cores
kp046: RAM: 90%, task0:4 cores, task1:4 cores

It is important to compare what you requested with what Ansys EDT decides to use; if there is a large mismatch between the requested and used resources, adjust the node/task count and resubmit the job. In this case, we would probably want to decrease the number of nodes from 2 to 1 and the number of tasks from 4 to 2, effectively running on just one node.

Finally, since Ansys EDT uses its own distributed task management system, we need some custom workarounds to start the necessary processes on all of the job's nodes. In case these workarounds fail (we hope not, but just in case), the simulation may run on just a single node. Please check the log file for messages that indicate this, which look like:

[error] Project:StripesOfGrapheneOnPolymiade, Design:StripesOfGraphenePolyVsTau2DSh1 (DrivenModal), Unable to locate or start COM engine on 'kp075' : Unable to reach AnsoftRSMService. Check if the service is running and if the firewall allows communication.
[warning] Project:StripesOfGrapheneOnPolymiade, Design:StripesOfGraphenePolyVsTau2DSh1 (DrivenModal), Distributed solve error on machine 'kp075'. Task dropped from simulation.

 If you see these messages, let us know.
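
A quick way to check for them is to search the job log file (using the log file name from the example above):

grep AnsoftRSMService myInput.log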

Last Updated: 3/28/24