
Open OnDemand web portal

Open OnDemand is a web portal that provides access to CHPC file systems and clusters. It allows users to view, edit, upload, and download files; create, edit, submit, and monitor jobs; run GUI applications; and connect via SSH, all through a web browser and with minimal knowledge of Linux and scheduler commands.

The Ohio Supercomputer Center (OSC), which develops Open OnDemand, has a detailed documentation page covering all of the features, most of which are functional at CHPC as well.

Connecting to Open OnDemand

To connect, point your web browser to https://ondemand.chpc.utah.edu and authenticate with your CHPC user name and password. For the Protected Environment, go to https://pe-ondemand.chpc.utah.edu instead. After logging in, a front page is shown with a top menu like this:
OOD top menu

This menu provides access to the OnDemand tools.

XDMoD integration

CHPC uses XDMoD to report job metrics such as resource utilization at xdmod.chpc.utah.edu, or pe-xdmod.chpc.utah.edu in the Protected Environment. The OnDemand front dashboard provides links to the user's utilization data; however, the user must first authenticate to the xdmod.chpc.utah.edu server.

OOD XDMoD not logged in

After clicking this link and logging into XDMoD, the metrics data will display:

OOD XDMoD metrics

Note that due to a technical glitch, 50% in the job efficiency report currently corresponds to 100% in reality.

File Management and Transfer

The Files menu allows one to view and operate on files in the user's home directory. OSC's File Transfer and Management help page provides details on its use. This is complementary to opening a FastX remote desktop and using that desktop's file manager, or to using SCP-based remote transfer tools like WinSCP or CyberDuck.
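For command-line transfers, an SCP command along these lines can also be used. This is a minimal sketch; replace uNID with your CHPC user name, and note that the notchpeak.chpc.utah.edu host name and the file names are only illustrative:

  # upload a file from your computer to your CHPC home directory
  scp mydata.csv uNID@notchpeak.chpc.utah.edu:~/
  # download a result file from CHPC to the current local directory
  scp uNID@notchpeak.chpc.utah.edu:~/results.txt .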

Job Management

Jobs can be monitored, created, edited, and submitted with the job management tools under the Jobs menu. OSC's Job Management help page provides more information on the use and features. This serves as a GUI alternative to the SLURM scheduler commands (Active Jobs menu item) and allows one to write SLURM batch scripts with the help of pre-defined templates (Job Composer menu item). If you don't see a job template you would like, contact us at helpdesk@chpc.utah.edu.
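For orientation, below is a minimal sketch of the kind of SLURM batch script the Job Composer templates help create. The account, partition, module, and program names are placeholders and need to be replaced with your own:

  #!/bin/bash
  #SBATCH --job-name=my_job          # name shown in the queue
  #SBATCH --account=my-account       # placeholder: your CHPC account
  #SBATCH --partition=notchpeak      # placeholder: the partition to run in
  #SBATCH --nodes=1
  #SBATCH --ntasks=1
  #SBATCH --time=01:00:00            # wall time, HH:MM:SS

  module load mymodule               # placeholder: load the software the job needs
  ./my_program                       # placeholder: the program to run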

Shell Access

The Clusters tab provides links for shell access to the interactive nodes of all CHPC clusters. The shell terminal is similar to many other tools that provide terminal access.
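As an example, here are a few commands one might run in this shell right after connecting; the module name is illustrative:

  squeue -u $USER         # list your queued and running jobs
  module avail            # see available software modules
  module load matlab      # load a module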

Interactive Applications

The Interactive Apps menu contains items to launch certain GUI applications interactively on CHPC cluster compute nodes or the Frisco nodes. The supported applications include a remote desktop, Ansys Workbench, Abaqus, COMSOL, IDV, Lumerical DEVICE, MATLAB, Mathematica, Jupyter Notebook, Paraview, RStudio Server and VMD. Other applications may be requested via helpdesk@chpc.utah.edu.

All application submissions default to the notchpeak-shared-short cluster partition, which is designed to provide interactive access to compute resources, with limits of 32 CPU cores and 8 hours of wall time per job. However, any cluster partition/allocation or Frisco node can be requested this way, with a few exceptions noted in the respective app's launch page.
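For reference, a batch job equivalent to these defaults could be requested with SLURM directives like the ones below. This is only a sketch; in particular, the assumption that the account name matches the partition name should be verified for your group:

  #SBATCH --partition=notchpeak-shared-short    # default partition used by the OnDemand apps
  #SBATCH --account=notchpeak-shared-short      # assumed account name; verify with CHPC
  #SBATCH --ntasks=4                            # up to 32 CPU cores per job
  #SBATCH --time=2:00:00                        # up to 8 hours of wall time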

Below we give step-by-step instructions for selected applications.

Interactive Desktop

The Interactive Desktop app allows one to submit an interactive job to a cluster and attach to it via a remote interactive desktop connection. One thus gets a fully functioning desktop on a cluster compute node, or on a Frisco node, from which GUI applications can be run.

To open the desktop, first select the Interactive Apps - Interactive Desktop menu item, which brings up the following dialog where we can specify some job parameters, such as the cluster to run on, account, partition, walltime, and node count (note that only one node's desktop will be accessible, so keep the node count at one unless the program that you plan to run knows how to access the other nodes in the job, e.g. is an MPI program).
OOD desktop 1

After pushing the Launch button, the job gets submitted. A new window appears informing that the job is being staged. Once the job starts, the window is updated with the Interactive Desktop launch button (Launch Interactive Desktop). There is also the possibility to get a view-only web link to this interactive desktop (View Only button) that one can share with colleagues (who have access to ondemand.chpc.utah.edu). One can also open an SSH session to the node running the job by clicking on the blue host name box.
OOD Desktop 2

Once we click on Launch Interactive Desktop, a new browser tab opens with the desktop on the compute node. This interactive session will stay alive even if one closes the browser tab.

To remove the session, push the red Delete button, which cancels the job and the interactive session.

MATLAB

First choose the Interactive Apps - MATLAB menu item. A job submission window appears, where we can change parameters such as the cluster, MATLAB version, CPU core count, job walltime, and optionally memory and GPU requirements. MATLAB numerical calculations on vectors and matrices will benefit from parallelization up to roughly 16 CPU cores. Walltime should be chosen based on the estimated time the work will take.
OOD matlab submission

Once the Launch button is pushed, the job is submitted. If there are available resources (there should be on notchpeak-shared), the job will start, after which the interactive session will show that the job is running.
OOD Matlab job session

Pushing the Launch MATLAB button will open a new browser tab, which will start an interactive desktop session, followed by a launch of MATLAB GUI:
OOD matlab session

When done, close the MATLAB session browser tab and click the Delete button in the interactive sessions list to cancel the job.

Ansys Workbench

To run, for example, the Ansys Workbench, choose Interactive Apps - Ansys Workbench to obtain the job submission window, where we fill in the needed parameters.
OOD Ansys 1

After hitting Launch we get the job status window, which gets updated when the job starts.
OOD Ansys 2

We open the interactive session with the Launch Ansys Workbench button to get the Ansys Workbench interface. There we can, for example, import a CFX input file, right-click on the Solution workbench item to bring up the Run menu, and hit Start Run to run the simulation. Run progress is then interactively displayed in the Solver Manager window. This provides a user with the same experience as running on a local workstation, but instead using one or more dedicated many-core cluster compute nodes.
OOD Ansys 3

Jupyter Notebook and Jupyter Lab

Jupyter notebooks allow for interactive code development. Although originally developed for Python, Jupyter notebooks now support many interpreted languages. Jupyter Lab provides a newer and more navigable user interface compared to the older Jupyter Notebook interface. A Jupyter Notebook or Lab instance that runs inside OnDemand on a cluster compute node, using dedicated resources, can be launched by choosing the Interactive Apps -> Jupyter Notebook or Jupyter Lab menu item. A job submission screen appears:
Jupyter job parameters

Set the job parameters. Unless using numerical libraries like NumPy, which are thread-parallelized, it is not advisable to choose more than one CPU.

In the Environment Setup, one can specify either the CHPC Python installation or one's own. This allows one to use their own Python stack, or even someone else's Python/Anaconda/Miniconda installation.

Note: If you are using your own Miniconda module, make sure that you have installed the Jupyter infrastructure by running conda install jupyter before attempting to launch any Jupyter job. For Jupyter Lab, run  conda install -c conda-forge jupyterlab.
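A consolidated sketch of these installation steps, assuming your own Miniconda is already active in the terminal session:

  # with your own Miniconda on the path
  conda install jupyter                      # Jupyter Notebook infrastructure
  conda install -c conda-forge jupyterlab    # additionally, for Jupyter Lab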

Kernels for additional languages can be installed following their appropriate instructions, e.g. for R, the IRkernel. When installing the IRkernel, make sure to put the kernel into the directory where the Miniconda is installed by using the prefix option - by default it does not go there, which may create conflicts with other Python versions. It is also a good idea to name the kernel so that multiple R versions can be supported, with the name option. That is, IRkernel::installspec(prefix='/uufs/chpc.utah.edu/common/home/uNID/software/pkg/my_miniconda', name='ir402').

Also note that if you are using Python virtual environments, you need to install ipykernel in each virtual environment.
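For example, a virtual environment could be registered as a Jupyter kernel roughly as follows; the environment path and kernel name are illustrative:

  source $HOME/my_venv/bin/activate                     # activate the virtual environment
  pip install ipykernel                                 # install the kernel package into it
  python -m ipykernel install --user --name my_venv     # register the kernel with Jupyter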

Once the job parameters are specified, hit the Launch button to submit the interactive job. The job gets queued up and, once it starts and Jupyter is provisioned, the following window appears:
Jupyter job ready

Click on the Connect to Jupyter button to open a new browser tab with the main Jupyter interface. Note that Jupyter's Running tab shows the active notebooks, for example:
jupyter running notebooks

We have installed support for Python (CHPC Linux python/3.6.3 module), R (CHPC Linux R/3.6.1 module) and Matlab (R2019a). If you need other versions or support for other languages, contact helpdesk@chpc.utah.edu.

NOTE: The Jupyter server starts in the user's home directory and only files in the home directory are accessible through the Jupyter file browser. To access files on other file systems (scratch, group space), create a symbolic link from that space to your home directory, e.g. ln -s /scratch/general/nfs1/u0123456/my_directory $HOME/my_scratch.

RStudio Server

RStudio Server runs the RStudio interactive development environment inside of a browser. The OnDemand implementation allows one to set up and launch RStudio Server on a cluster compute node with dedicated resources, which makes it possible to run more compute-intensive R programs in the RStudio environment. To start an RStudio Server job, first navigate to the Interactive Apps - RStudio Server menu item. A job parameters window appears:
Rstudio job parameters

Choose the appropriate job parameters, keeping in mind that R can internally thread-parallelize vector-based data processing, for which more than one CPU can be utilized. After clicking the Launch button, the cluster job is submitted, and once the job is allocated resources, the following window appears:
Rstudio launch

Clicking on the Connect to RStudio Server button opens a new tab with RStudio:
rstudio tab

To close the session, close the RStudio browser tab and push Delete to delete the running job.

NOTE: If you install new R packages and get an error g++: error: unrecognized command line option '-wd308', please modify your ~/.R/Makevars and remove all the flags that contain this option. We recommend these flags in order to build packages with the Intel compilers that are used to build CHPC's R. However, the R used in Open OnDemand is built with g++, therefore these flags are not valid there.
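One possible way to do this, assuming the flag appears literally as -wd308 in the file, is:

  cp ~/.R/Makevars ~/.R/Makevars.bak    # back up the current file
  grep wd308 ~/.R/Makevars              # inspect the lines that contain the flag
  sed -i 's/-wd308//g' ~/.R/Makevars    # remove the flag everywhere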

NOTE: For technical reasons, RStudio Server currently does not work on the Frisco nodes. Please, contact us if you need this functionality.

Last Updated: 9/9/21