
Open OnDemand Web Portal

Open OnDemand is a web portal that provides access to CHPC file systems and clusters. It allows you to view, edit, upload, and download files; create, edit, submit, and monitor jobs; run GUI applications; and connect via SSH, all through a web browser and with minimal knowledge of Linux and scheduler commands.

The Ohio Supercomputer Center, which develops Open OnDemand, has a detailed documentation page on all the features, most of which are functional at CHPC as well.


Connecting to Open OnDemand

To get started, go to https://ondemand.chpc.utah.edu and log in with your CHPC username and password.


After logging in, a front page is shown with a menu bar across the top. This menu bar provides access to the OnDemand tools.

File Management and Transfer

The Files menu provides easy access to your personal files on CHPC's storage systems, including your home directory, group spaces, and scratch file systems.

You can also use this menu to quickly transfer files between your personal computer and CHPC. For example, to upload a file to your Home Directory, simply click on the Files menu, select Home Directory, and then click the blue Upload button. A window will pop up, allowing you to browse and select files from your computer.

OSC's File Transfer and Management help page provides details on its use. 

Job Submission

Jobs can be created, edited, submitted, and monitored through the Job Composer tool or through the Interactive Apps on Open OnDemand; these serve as a GUI alternative to SLURM scheduler commands. OSC's Job Management help page provides more information on their use and features.

You can monitor your jobs by going to Jobs > Active Jobs. Here, you'll see a list of all active jobs on CHPC clusters. To narrow the list, you can filter for Your Jobs and select a specific cluster.
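For comparison, the same information is available from a cluster shell with SLURM's squeue command; the cluster name below is only an example, and the -M flag applies where multi-cluster operation is configured:

    # List your own active jobs (the equivalent of the "Your Jobs" filter)
    squeue -u $USER

    # Limit the listing to one cluster, e.g. notchpeak
    squeue -u $USER -M notchpeak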

Job Composer

Open OnDemand provides a Job Composer tool to help you write and submit SLURM batch scripts. To access it, click Jobs > Job Composer.

  • Templates: Choose a pre-defined template from the From Default Template drop-down menu to quickly set up a job for your chosen cluster and resource needs. We also have a variety of specific templates for certain software packages in the From Template menu.

  • Custom Scripts: If you have previously written your own SLURM scripts, you can use them as a template by selecting either From Specified Path or From Selected Job.

  • Need a Template? If you need a template for a package that isn't listed, please contact us at helpdesk@chpc.utah.edu.
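For reference, the scripts the Job Composer produces are ordinary SLURM batch scripts. A minimal sketch is shown below; the account, partition, module, and program names are placeholders that a template would fill in for you:

    #!/bin/bash
    #SBATCH --job-name=my_job        # name shown in the queue
    #SBATCH --account=my_account     # placeholder: your CHPC account
    #SBATCH --partition=notchpeak    # placeholder: the partition to run in
    #SBATCH --ntasks=4               # number of CPU cores
    #SBATCH --time=01:00:00          # wall time (hh:mm:ss)

    module load my_module            # placeholder: software module the job needs
    srun my_program                  # placeholder: the program to run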

Jobs can be submitted, stopped, and their scripts deleted with the Submit, Stop, and Delete buttons.

Interactive Applications

The Interactive Apps menu lets you launch specific graphical user interface (GUI) applications directly on CHPC compute nodes or Frisco nodes.

  • On clusters, these apps run as a scheduled SLURM job. This means they are allocated their own unique resources on a compute node, just like a job submitted from the command line.

  • On Frisco nodes, a new remote desktop login session is created for your application. This session is subject to the same resource limits as other Frisco login sessions.

The supported applications as of August 2025 include:

IDEs

  • Ansys Electronics Desktop
  • Ansys Workbench
  • Abaqus
  • COMSOL Multiphysics
  • Cambridge Structural Database
  • GPT4All LLM UI
  • IDL
  • LibreOffice
  • Lumerical DEVICE Suite
  • MATLAB
  • Mathematica
  • QGIS
  • RELION
  • SAS
  • Stata 
  • Visual Studio Code

Servers

  • Jupyter
  • Nvidia Inference Microservice
  • Protein Binder Design Jupyter 
  • R Shiny App
  • RStudio Server
  • Spark Jupyter
  • VSCode Server
  • vLLM Microservice

 

 

Visualization

  • Coot
  • IDV
  • Meshroom
  • Paraview
  • VMD

 

Other applications may be requested via helpdesk@chpc.utah.edu.

When using interactive applications in the general environment, all jobs are submitted by default to the notchpeak-shared-short cluster partition. This partition is designed for interactive use and has a default limit of 16 CPU cores and an 8-hour wall time per job. While this is the default, you can request any other cluster partition/allocation or Frisco node, with a few exceptions noted on the individual app's form page.

For the protected environment, the redwood-shared-short account and partition are used as the default, with a maximum of 8 CPU cores and an 8-hour wall time.
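These form defaults correspond to ordinary SLURM directives. As a rough sketch for the general environment (the account name is an assumption; your own account and allocation may differ):

    #SBATCH --account=notchpeak-shared-short    # assumption: account name matching the partition
    #SBATCH --partition=notchpeak-shared-short  # interactive partition used by default
    #SBATCH --ntasks=16                         # up to the 16-core default limit
    #SBATCH --time=08:00:00                     # up to the 8-hour wall time limit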

Most interactive applications share a similar form and input fields, making it easy to navigate and submit jobs.

 

After selecting your job options, click the Launch button to start the session. A new window called My Interactive Sessions will open, showing that your job is being staged.

Once the job is ready, the window will update with another Launch button. Click this to open a new browser tab with your application running on the compute node. Remember, closing this browser tab will not end your session; it will continue to run until its specified time limit (walltime) is reached.

To end a session early, simply click the red Delete button.

Additional Features

  • View-Only Links: If supported, you can share a read-only link to your session with colleagues who also have access to the Open OnDemand portal.

  • SSH Access: To open a terminal session to the compute node, click on the blue host name box.

Troubleshooting

If you encounter an issue, please provide us with the Session ID. This is a link to the directory containing your job session files and will help us quickly identify and troubleshoot the problem.

Interactive Desktop Example

Required Input Options

There are several required inputs: Cluster, Account and Partition, Number of cores (per node), and Number of hours. Certain applications also include Program version and a few specific inputs. Below are the descriptions of these required inputs:

Cluster

Select the cluster to run the Interactive App on from the pull-down options. The default is notchpeak, since it houses the notchpeak-shared-short partition, which targets interactive Open OnDemand jobs.


Account and partition

The SLURM account and partition to use. Options automatically change for the selected cluster.

Visit this page to learn more about SLURM accounts and partitions.


Number of CPUs

The Number of CPUs field lets you specify how many processing cores your job should use. 


Number of hours

The duration of the Interactive App session. Note that once the time expires, your session automatically ends, and any unsaved work may be lost, depending on the application.


 

Advanced Input Options

Advanced options appear after checking the Advanced options checkbox. If any advanced option has a non-default value, the form will not allow the advanced options to be hidden; to hide them, first clear the value or set it back to the default. The advanced options include:

Number of Nodes

Number of compute nodes that the job will use. This is only active for programs that support multi-node execution, such as Abaqus, AnsysWB, AnsysEDT, Comsol, Lumerical, and Relion. Unless your application is designed to run across multiple nodes, you can leave the default value at 1.

Note that the Number of CPUs is specified per node, so the total number of CPUs for the job will be Number of nodes x Number of CPUs.
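As a sketch in SLURM terms, a two-node job with 16 CPUs per node requests 32 cores in total:

    #SBATCH --nodes=2              # Number of nodes field
    #SBATCH --ntasks-per-node=16   # Number of CPUs (per node) field
    # Total CPU cores for the job: 2 x 16 = 32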


Memory per Job (in GB)

Amount of memory the job needs. This refers to the total memory for the whole job, not per CPU. The default is 2 GB or 4 GB per CPU; if left at 0, the default is used.


GPU Type

Type of GPU devices to use for a job, along with the number of GPUs to request.

For more details on GPU types, owners, and the number of GPUs per node, please consult the GPU Node List.
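In batch-script terms this maps onto SLURM's generic resource request; the GPU type below is only an illustrative example, so check the GPU Node List for the types that actually exist:

    #SBATCH --gres=gpu:a100:1   # example: one GPU of (assumed) type "a100"
    # or, if any GPU type will do:
    #SBATCH --gres=gpu:1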


Nodelist

The NodeList option allows you to specify a list of particular compute nodes you want your job to run on.

This is a useful feature for targeting specific hardware or machines. It is the equivalent of the -w option in SLURM.
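In a batch script this corresponds to the --nodelist (-w) option; the node names below are hypothetical:

    #SBATCH --nodelist=notch001,notch002   # hypothetical node names: run only on these nodes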


Additional Environment

Additional environment settings to be imported into the job, such as loading modules or setting environment variables, written in BASH syntax. Note that this works only for applications that run natively on CHPC systems, not for applications that run in containers.
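For example, the field accepts ordinary BASH lines such as the following (the module name and version are placeholders):

    module load gcc/11.2.0                        # placeholder module and version
    export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE    # example environment variable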


Constraints

The Constraints option allows you to set specific requirements for the nodes your job will run on. This can be very useful for several purposes:

  • Avoiding Preemption: You can target less-used owner-guest nodes, which can lower the chances of your job being preempted.

  • Available Node Features: You can request specific CPU architectures, such as "AMD" or "Intel", or specify the amount of memory or the number of CPUs a node must have.

For more information, refer to the Constraint Usage documentation.

Constraints Usage Example

Constraints are limited to one line; the image above shows an example of requesting nodes with either 64 GB or 128 GB of memory and 16 cores.
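As a sketch, the same request could be written as a single SLURM constraint expression; the feature names (c16, m64, m128) are assumptions about how CHPC labels core counts and memory sizes, so verify them against the Constraint Usage documentation:

    # Assumed features: c16 = 16-core nodes, m64/m128 = 64 GB or 128 GB of memory
    #SBATCH --constraint="c16&(m64|m128)"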

 

Access & Job Management

Shell Access

The Clusters drop-down menu at the top provides links for shell access to all CHPC clusters via interactive nodes. The shell terminal is opened in a new tab in your web browser.

Clusters available for shell access on Open OnDemand are:

  • Kingspeak
  • Lonepeak
  • Notchpeak
  • Granite
  • frisco1
  • frisco2
  • frisco3
  • frisco4
  • frisco5
  • frisco6
  • frisco7
  • frisco8

Job Monitoring

CHPC is transitioning its job metrics reporting platform. We are currently phasing out XDMoD as the primary tool for detailed insights into job metrics and resource utilization.

We are actively working to implement Portal, which will provide users with enhanced resource and utilization reporting capabilities. Updates regarding the new platform's availability will be announced soon.

 

 

 

Things to Note

  • When using Jupyter Notebook on CHPC, you can leverage your own Miniforge environments. This is the recommended alternative to Miniconda, as it defaults to the community-driven conda-forge channel, providing a more up-to-date and open-source package management experience.
  • Within the Jupyter Notebook application, you can use different kernels to execute code in various languages, such as Python and R. You can also create and manage your own environments to install specific packages and dependencies for your projects.
  • Jupyter/Miniforge Setup: Before you can launch a Jupyter job using a custom Miniforge environment, you must first install the necessary Jupyter infrastructure. This is done by running conda install jupyter (or conda install jupyterlab for Jupyter Lab) within your activated Miniforge environment; a short example of this setup follows this list.

  • RShiny App: For technical reasons, RShiny App does not currently work on the Frisco nodes. You will need to use a cluster partition for those jobs.
  • RStudio Server: For technical reasons, CHPC builds of the RStudio Server only work on the Frisco nodes. You will need to use a Frisco node for those jobs.
  • Internet Access: Interactive sessions on compute nodes, including those for RStudio, do not have direct internet access. If you need to install new packages or download files, you must do so from a login node or visualization node.
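As a sketch of that Miniforge setup, run from a login node (the environment name and package list are placeholders):

    # Create and activate an environment; "myenv" and its packages are placeholders
    conda create -n myenv python numpy
    conda activate myenv

    # Install the Jupyter infrastructure so the Jupyter app can use this environment
    conda install jupyter          # or: conda install jupyterlab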
Last Updated: 9/26/25