
Open OnDemand Web Portal

Open OnDemand is a web portal that provides access to CHPC file systems and clusters. It allows you to view, edit, upload, and download files; create, edit, submit, and monitor jobs; run GUI applications; and connect via SSH, all from a web browser and with minimal knowledge of Linux and scheduler commands.

The Ohio Supercomputer Center (OSC), which develops Open OnDemand, maintains a detailed documentation page covering all of its features, most of which work at CHPC as well.


Connecting to Open OnDemand

To get started, go to https://ondemand.chpc.utah.edu and log in with your CHPC username and password.


After logging in, the dashboard front page opens with a menu bar across the top that provides access to the OnDemand tools.

 

File Management and Transfer

The Files menu provides easy access to your personal files on CHPC's storage systems, including your home directory, group spaces, and scratch file systems.

You can also use this menu to quickly transfer files between your personal computer and CHPC. For example, to upload a file to your Home Directory, simply click on the Files menu, select Home Directory, and then click the blue Upload button. A window will pop up, allowing you to browse and select files from your computer.

OSC's File Transfer and Management help page provides details on its use. 

Job Submission

Jobs can be created, edited, submitted, and monitored through the Job Composer tool or through the Interactive Applications, which serve as a GUI alternative to SLURM scheduler commands. OSC's Job Management help page provides more information on these features.

You can monitor your jobs by going to Jobs > Active Jobs. Here, you'll see a list of all active jobs on CHPC clusters. To narrow the list, you can filter for Your Jobs and select a specific cluster.
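If you prefer the command line, roughly the same check can be done from a cluster shell. A minimal sketch, assuming a SLURM version recent enough to support the --me flag:

    # List only your own jobs on the cluster you are logged into
    squeue --me

    # List your jobs on a specific cluster (SLURM multi-cluster option)
    squeue --me -M notchpeak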

Job Composer

Open OnDemand provides a Job Composer tool to help you write and submit SLURM batch scripts. To access it, click Jobs > Job Composer.

  • Templates: Choose a pre-defined template from the From Default Template drop-down menu to quickly set up a job for your chosen cluster and resource needs. We also have a variety of specific templates for certain software packages in the From Template menu.

  • Custom Scripts: If you have previously written your own SLURM scripts, you can use them as a template by selecting either From Specified Path or From Selected Job.

  • Need a Template? If you need a template for a package that isn't listed, please contact us at helpdesk@chpc.utah.edu.

Jobs can be submitted, stopped, and their scripts deleted with the Submit, Stop, and Delete buttons.
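For reference, a script produced from a default template looks roughly like the minimal sketch below; the account, partition, and module names are placeholders, not CHPC defaults:

    #!/bin/bash
    #SBATCH --job-name=my_job        # job name shown in the queue
    #SBATCH --account=my-account     # replace with your CHPC account
    #SBATCH --partition=kingspeak    # replace with the desired partition
    #SBATCH --ntasks=1               # number of CPU cores
    #SBATCH --time=01:00:00          # wall time, HH:MM:SS

    module load mymodule             # hypothetical module; load what the job needs
    ./my_program                     # command(s) to run

From a shell, such a script would be submitted with sbatch; in the Job Composer, the Submit button does this for you.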

Shell Access

The Clusters drop-down menu at the top provides links for shell access to all CHPC clusters via their interactive nodes. The shell terminal opens in a new tab in your web browser.

Clusters and nodes available for shell access in Open OnDemand are:

  • Kingspeak
  • Lonepeak
  • Notchpeak
  • Granite
  • frisco1
  • frisco2
  • frisco3
  • frisco4
  • frisco5
  • frisco6
  • frisco7
  • frisco8

Job Monitoring

CHPC uses XDMoD to provide detailed reports on job metrics, including resource utilization.

The OnDemand dashboard front page provides links to your user utilization and job efficiency data in XDMoD.

 

 

Interactive Applications

The Interactive Apps menu lets you launch specific graphical user interface (GUI) applications directly on CHPC compute nodes or Frisco nodes.

  • On clusters, these apps run as a scheduled SLURM job. This means they are allocated their own unique resources on a compute node, just like a job submitted from the command line.

  • On Frisco nodes, a new remote desktop login session is created for your application. This session is subject to the same resource limits as other Frisco login sessions.

The supported applications include:

IDEs

  • Ansys Electronics Desktop
  • Ansys Workbench
  • Abaqus
  • COMSOL Multiphysics
  • Cambridge Structural Database
  • GPT4All LLM UI
  • IDL
  • LibreOffice
  • Lumerical DEVICE Suite
  • MATLAB
  • Mathematica
  • QGIS
  • RELION
  • SAS
  • Stata 
  • Visual Studio Code

Servers

  • Jupyter
  • Nvidia Inference Microservice
  • Protein Binder Design Jupyter 
  • R Shiny App
  • RStudio Server
  • Spark Jupyter
  • VSCode Server
  • vLLM Microservice

 

 

Visualization

  • Coot
  • IDV
  • Meshroom
  • Paraview
  • VMD

 

Other applications may be requested via helpdesk@chpc.utah.edu.

When using interactive applications in the general environment, all jobs are submitted by default to the notchpeak-shared-short cluster partition. This partition is designed for interactive use and has a default limit of 16 CPU cores and an 8-hour wall time per job. While this is the default, you can request any other cluster partition/allocation or Frisco node, with a few exceptions noted on the individual app's form page.

For the protected environment, the redwood-shared-short account and partition are used by default, with a maximum of 8 CPU cores and an 8-hour wall time.
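Expressed as SLURM directives, these defaults correspond roughly to the sketch below; the Interactive App form fills these in for you, and the account names are assumed here to match the partition names:

    # General environment default
    #SBATCH --account=notchpeak-shared-short     # assumed account name
    #SBATCH --partition=notchpeak-shared-short
    #SBATCH --ntasks=1                           # up to 16 CPU cores
    #SBATCH --time=08:00:00                      # up to 8 hours

    # Protected environment default
    #SBATCH --account=redwood-shared-short
    #SBATCH --partition=redwood-shared-short
    #SBATCH --ntasks=1                           # up to 8 CPU cores
    #SBATCH --time=08:00:00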

Most interactive applications share a similar form and input fields, making it easy to navigate and submit jobs.

 

After selecting your job options, click the Launch button to start the session. A new window called My Interactive Sessions will open, showing that your job is being staged.

Once the job is ready, the window will update with a Launch button. Click this to open a new browser tab with your application running on the compute node. Remember, closing this browser tab will not end your session; it will continue to run until its specified time limit (walltime) is reached.

To end a session early, simply click the red Delete button.

Additional Features

  • View-Only Links: If supported, you can share a read-only link to your session with colleagues who also have access to the Open OnDemand portal.

  • SSH Access: To open a terminal session to the compute node, click on the blue host name box.

Troubleshooting

If you encounter an issue, please provide us with the Session ID. This is a link to the directory containing your job session files and will help us quickly identify and troubleshoot the problem.

Interactive Desktop Example

Required Input Options

There are several required inputs: Cluster, Account and Partition, Number of cores (per node), and Number of hours. Certain applications also include Program version and a few application-specific inputs. Below are descriptions of these required inputs:

Cluster

To select the cluster to run the Interactive App on, choose one of the pull-down options. The default is notchpeak, since it houses the notchpeak-shared-short partition, which targets interactive Open OnDemand jobs.


Account and partition

The SLURM account and partition to use. Options automatically change for the selected cluster.


Number of CPUs

The Number of CPUs field lets you specify how many processing cores your job should use. Unless your application is designed to run across multiple cores, you can leave the default value at 1.


Number of hours

The duration of the Interactive App session. Note that once the time expires, the session ends automatically, and any unsaved work may be lost, depending on the application.


Advanced Input Options

Advanced options appear after checking the Advanced options checkbox. If any of the advanced options has a non-default value, the form will not allow the advanced options to be hidden; to hide them again, clear the value or reset it to the default. The advanced options include:

Number of nodes

Number of compute nodes that the job will use. This is only active for programs that support multi-node execution, such as Abaqus, AnsysWB, AnsysEDT, Comsol, Lumerical, and Relion. Note that the Number of CPUs field is interpreted as CPUs per node, so the total number of CPUs for the job is Number of nodes x Number of CPUs.
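For example, a job on two nodes with 8 CPUs per node uses 16 CPUs in total; as SLURM directives this corresponds roughly to:

    #SBATCH --nodes=2                # Number of nodes
    #SBATCH --ntasks-per-node=8      # Number of CPUs (per node)
    # total CPUs for the job = 2 x 8 = 16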


Memory per job (in GB)

Amount of memory the job needs. This refers to the total memory for the whole job, not per CPU. The default is 2 GB or 4 GB per CPU; if the field is left at 0, the default is used.
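This field corresponds roughly to the SLURM --mem directive, for example:

    #SBATCH --mem=32G                # total memory for the whole job, not per CPU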


GPU type, count

Type of GPU and count of GPU devices to use for the job. Make sure to select a partition that includes this GPU, based on the ownership of the node (general = owned by CHPC; owner = owned by a specific research group). The floating-point precision of the GPU (SP or DP) is also listed. See the GPU node list for details on GPU features, owners, and counts per node. Specify the number of GPUs requested in the GPU count field.
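This corresponds roughly to a SLURM GPU request; a sketch with a hypothetical GPU type, count, and partition:

    #SBATCH --partition=notchpeak-gpu    # a partition that contains the requested GPU (hypothetical)
    #SBATCH --gres=gpu:a100:2            # GPU type (a100) and GPU count (2), both hypothetical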


Nodelist

List of nodes to run the job on (equivalent to the SLURM -w option). Useful for targeting specific compute nodes.
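A sketch of the equivalent directive, with hypothetical node names:

    #SBATCH -w notch001,notch002     # run only on the listed nodes (hypothetical names)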


Additional Environment

Additional environment settings to be imported into the job, such as loading modules or setting environment variables, written in BASH syntax. Note that this works only for applications that run natively on CHPC systems, not for applications that run in containers.
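For example, the field might contain lines like the following; the module name and variable are hypothetical:

    module load gcc/11.2.0           # hypothetical module and version
    export OMP_NUM_THREADS=4         # set an environment variable for the job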


Constraints

Constraints for the job (the SLURM -C option). Helpful for targeting less-used owner-guest nodes to lower the chance of preemption; see this page for more information. Constraints can also be used to request specific CPU architectures, e.g. only AMD or only Intel CPUs. Use the OR operator (|) to combine multiple options.
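A sketch of the equivalent directive, with hypothetical feature names joined by the OR operator:

    #SBATCH -C "skl|csl"             # accept nodes with either feature (hypothetical feature names)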


 

Last Updated: 8/20/25