Guide to Getting Started at the CHPC

Welcome to the Center for High Performance Computing (CHPC) at the University of Utah, the university's central hub for research computing and data. This page provides a simple, step-by-step guide designed specifically for new users who want to leverage CHPC's diverse computing resources to advance their research. Whether you're new to high-performance computing or just new to CHPC, this guide will help you get started.

Accounts

To get started using CHPC systems, apply for a CHPC account using our online application form.

  Note: You must be provisioned at the University of Utah campus level with a valid, active uNID (University of Utah ID number) before you can be provisioned at CHPC. Visit our Accounts page for more information on obtaining a uNID and managing your account.

Logging in

After creating your account, connect to an HPC cluster. Here are a few ways to log in:  

  • Open OnDemand. Works in your web browser and includes many commonly used graphical applications; this is a great way to interact with CHPC's resources, especially if you are not yet comfortable with using a terminal.
  • SSH. A common method for logging in to a Linux system if you are familiar with using a terminal (see the example below).
  • FastX. Supports graphical software in a web browser and can also be installed on your computer.

For more information, visit our Accessing CHPC Resources page.
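
For example, logging in over SSH from a terminal looks roughly like this. The hostname shown is an example login-node alias and u0123456 is a placeholder uNID; see the Accessing CHPC Resources page for the current login hostnames:

    # Connect to a cluster login node with your uNID
    ssh u0123456@notchpeak.chpc.utah.edu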

Storing files

Most CHPC users will be interested in the following storage options:

  • Home directories. By default, every CHPC user has 50 GB of home directory space, which is not backed up. Research groups may purchase home directory space, which includes backups to archive storage. More information about purchasing space and backups is available on our Storage page.
  • Group space. Group storage is available for purchase in both the General and Protected Environments. Like home directory space, group storage is not designed for running jobs; use scratch space for that.
  • Scratch space. Scratch space is available to all users at no cost. It holds intermediate files while a job is running. Files that have not been accessed within 60 days are removed, and the scratch space is not backed up.
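
If you want a quick look at how much space you are using, standard Linux commands work on the clusters (a simple sketch; the Storage page documents CHPC's own quota-reporting tools):

    # Show the size and free space of the filesystem behind your home directory
    df -h ~
    # Summarize how much data your home directory currently holds
    du -sh ~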

CHPC has dedicated data transfer nodes so you can transfer data at high speed.

Here are examples of different methods you can use to transfer data through the data transfer nodes:

  • Rclone. A command-line tool that lets you easily transfer and sync files between your computer and various cloud storage services, such as Google Drive, Box, Dropbox, and S3-compatible providers.
  • Globus. Provides tools for efficient and secure data transfers, enabling parallel, load-balanced, and fault-tolerant data movement.
  • SCP. A Linux command-line tool for securely transferring files to and from CHPC's interactive nodes or a data transfer node (DTN). See the sketches after this list.
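
As a sketch, transfers with scp and rclone might look like the following. The DTN hostname, uNID, rclone remote name, and paths are all placeholders; check the data transfer documentation for the actual endpoints:

    # Copy a local file to your CHPC home directory through a data transfer node
    scp results.tar.gz u0123456@dtn.chpc.utah.edu:~/

    # Copy a directory from a previously configured rclone remote (here named "mybox")
    # into a scratch directory; the remote name and paths are examples only
    rclone copy mybox:project-data /scratch/general/vast/u0123456/project-data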

Accessing software

You can use modules to load the software you need into your environment and unload it when you are finished. Details on available software and usage instructions are available on our software page.
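
For instance, a typical module session might look like this (the package and version are examples; the software page lists what is actually installed):

    # See which modules are available to load
    module avail
    # Search for a specific package across all module hierarchies
    module spider python
    # Load a package into your environment, check it, and unload when done
    module load python/3.10
    module list
    module unload python/3.10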

Running a job with Slurm

Slurm is the cluster's job scheduling system. All large computational workloads are managed through Slurm. From the command line, you can use the command mychpc batch to see what Slurm resources you have access to.

Access our Slurm documentation to learn how to request computational resources from our clusters.
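
As a minimal sketch, a Slurm batch script might look like the one below; the account and partition values are placeholders that should be replaced with the ones mychpc batch reports for you:

    #!/bin/bash
    #SBATCH --account=my-account     # placeholder; use an account from mychpc batch
    #SBATCH --partition=notchpeak    # placeholder; use a partition you can access
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=01:00:00          # walltime limit (HH:MM:SS)
    #SBATCH --job-name=first-job

    # Load the software the job needs, then run it
    module load python/3.10
    python my_script.py

Submit the script with sbatch and monitor it with squeue:

    sbatch first-job.slurm
    squeue -u $USER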

Options when submitting jobs

To conduct computational research on CHPC-owned resources on the granite, notchpeak, and redwood clusters, an allocation is required. To find out which resources you can access, run the mychpc batch command in your terminal after logging in to a cluster. Older clusters in the General Environment (kingspeak and lonepeak) do not require an allocation.

Research groups can also purchase nodes to add to a CHPC-managed cluster.

  If you are not sure which options to use when submitting Slurm jobs, try using our tool that describes accounts and partitions.
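
For example, a short interactive session can be requested with salloc, again with placeholder account and partition names:

    # Request an interactive shell on one node for 30 minutes
    salloc --account=my-account --partition=notchpeak --nodes=1 --ntasks=1 --time=00:30:00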

Training and support

We understand that our users come from different academic backgrounds and have different levels of experience with research computing. Whether you're an undergraduate participating in your first research experience, a faculty member already familiar with computational research, or anywhere in between, we're here to support you.

For more information, visit our FAQ page.

Last Updated: 10/29/25