Getting Started at the Center for High Performance Computing

Welcome to the Center for High Performance Computing (CHPC) at the University of Utah, the university's central hub for research computing and data. This page provides a simple, step-by-step guide designed specifically for new users who want to leverage CHPC's diverse computing resources to advance their research. Whether you're new to high-performance computing or just new to CHPC, this guide will help you get started.

1. Accounts

To get started, apply for a CHPC account using our online application form.

  Note: You must have a valid, active uNID (University of Utah ID number) at the University of Utah campus level before a CHPC account can be provisioned. Visit our Accounts page for more information on obtaining a uNID and managing your account.

2. Logging in

After creating your account, connect to a cluster. Here are a few ways to log in:  

  • SSH
    • SSH is a common method for logging in to a cluster if you are familiar with using a command line; see the example after this list.
  • FastX
    • Similar to SSH, but FastX also supports graphical software. It runs in a web browser or can be installed on your computer.
  • Open OnDemand
    • Open OnDemand, which works in your web browser, includes many commonly used graphical applications. It is a great way to interact with CHPC's resources, especially if you are not yet comfortable with the command line.
  • Narwhal
    • Narwhal provides a remote desktop connection that lets users with access to the protected environment (PE) work with their files and desktop applications.
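
For example, a basic SSH login from a terminal looks like the following; the uNID u0123456 is a placeholder for your own, and you can swap notchpeak for whichever cluster you want to reach:

    # Log in to a cluster login node with your uNID
    ssh u0123456@notchpeak.chpc.utah.edu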

For more information, visit our Accessing CHPC Resources page.

3. Storing Files

After logging in, transfer your files to your CHPC account so you can use them on CHPC's clusters, for example with scp as shown below.
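
A minimal sketch of copying a file from your own machine, assuming the placeholder uNID u0123456 and a local file named mydata.csv:

    # Copy a local file to your CHPC home directory
    scp mydata.csv u0123456@notchpeak.chpc.utah.edu:~/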

CHPC provides the following storage options:

  • Home directories. By default, every CHPC user has 50 GB of home directory space, which is not backed up. Research groups may purchase additional home directory space, which includes backups to the VAST storage system. More information about purchasing space and backups is available on our Storage page.
  • /scratch/local/. Scratch space is available to all users at no cost. It is intended for intermediate files while a job is running. Files that have not been accessed within 60 days are removed, and scratch space is not backed up.
  • /tmp and /var/tmp. CHPC cluster nodes use RAM disks for /tmp and /var/tmp, which offer limited space. For larger temporary storage needs, use the local disk at /scratch/local by setting the TMPDIR environment variable, as shown below; this provides significantly more space than the default RAM disks.
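
A short sketch of redirecting temporary files in a batch script; the directory layout is illustrative, and $SLURM_JOB_ID is set by the scheduler when the job starts:

    # Point TMPDIR at the node-local scratch disk instead of the RAM-backed /tmp
    export TMPDIR=/scratch/local/$USER/$SLURM_JOB_ID
    mkdir -p "$TMPDIR"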

To learn more about our group storage services visit our Storage page.

4. Accessing Software

Now that your files are stored, you can utilize CHPC's extensive software library. Details on available software and usage instructions are available on our Software page.
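
Most installed software is made available through an environment module system; assuming modules are set up in your shell (see the Software page for specifics), a typical session looks like this, with gcc standing in for whatever package you need:

    module avail        # list software available on this cluster
    module load gcc     # add a package to your environment (name is illustrative)
    module list         # show currently loaded modules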

5. Running a Job on Slurm

Slurm (Simple Linux Utility for Resource Management) is the clusters' job scheduling system; all program executions are managed through Slurm. See our Slurm documentation to learn how to write scripts that schedule your jobs on a cluster.
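
As a starting point, here is a minimal sketch of a Slurm batch script; the account, partition, and program names are placeholders you will need to replace with your own (the 'mychpc batch' command described below reports your account information):

    #!/bin/bash
    #SBATCH --job-name=my_first_job   # name shown in the queue
    #SBATCH --account=my-account      # your allocation account (placeholder)
    #SBATCH --partition=notchpeak     # partition to run on (placeholder)
    #SBATCH --nodes=1                 # request one node
    #SBATCH --ntasks=1                # request one task
    #SBATCH --time=00:10:00           # ten-minute walltime limit

    # Run the program (replace with your own executable)
    ./my_program

Save the script (for example, as myjob.sh), submit it with 'sbatch myjob.sh', and monitor it with 'squeue -u $USER'.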

6. Knowing your Allocation

To conduct research on the Granite and Notchpeak clusters at CHPC, an allocation is required. To find out what allocation you have, run the 'mychpc batch' command in your terminal after logging in to a cluster. We recommend that new users start with the clusters in the general environment that can be used without an allocation.
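
For example:

    # Show the allocation and batch account information tied to your account
    mychpc batch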

Training and support

We understand that our users come from different academic backgrounds and have different levels of experience with research computing. Whether you're an undergraduate student participating in your first research experience, a faculty member already familiar with computational research, or anywhere in between, we're here to support you.

For more information, visit our FAQ page.

Last Updated: 1/24/25