
3.1 File Storage Policies

  1. CHPC Home Directory File Systems
    Many of the CHPC home directory file systems are based on NFS (Network File System), and proper management of files is critical to the performance of applications and of the entire network. All files in home directories are NFS mounted from a file server, so every request for data must travel over the network. Therefore, it is advised that all executables and input files be copied to a scratch directory before running a job on the clusters (see the staging sketch under Scratch Disk Space below).
    1. The general CHPC home directory file system (CHPC_HPC) is available to users who have a CHPC account and do not have a department or group home directory file system maintained by CHPC (see item 2-2 below). This file system enforces quotas set at 50 GB per user. If you need a temporary increase on this limit, please let us know (helpdesk@chpc.utah.edu) and we can increase it based on your needs. To apply for a permanent quota increase, the CHPC PI responsible for the user should contact CHPC (helpdesk@chpc.utah.edu) with a formal request that includes a justification for the increase. This file system is not backed up, and users are encouraged to move important data back to a file system that is backed up, such as a department file server.
    2. Department or Group owned storage
      1. Departments or PIs with sponsored research projects can work with CHPC to procure storage to be used as CHPC Home Directory or Group Storage.
      2. Home directory space purchases include full backup as described in the Backup Policies below.
      3. The owner of group storage can arrange for archival backup as described in the Backup Policies below.
      4. Usage Policies of this storage will be set by the owning department/group.
      5. When using shared infrastructure to support this storage, all groups are still expected to be 'good citizens', meaning that their utilization should be moderate and should not impact other users of the file server.
      6. Quotas
        1. User and/or group quotas can be used to control usage
        2. The quota layer will be enabled, allowing usage to be reported even if quota limits are not set
      7. Any backups of owner home directory space run regularly by CHPC have a two-week retention period; see the Backup Policies below.
      8. Life Cycle
        1. CHPC will support storage for the duration of the warranty period.
        2. CHPC will make a 'best effort' to support storage beyond the warranty period.
        3. Factors that would contribute to the termination of 'best effort' support include, but are not limited to:
          1. General health of the device
          2. Potential impact of maintaining an unsupported device
          3. Ability to acquire and replace components
    3. Archive storage
      1. XX
    4. Web Support from home directories
      1. Place HTML files in a public_html directory in your home directory (a short example follows this list)
      2. URL published: "http://home.chpc.utah.edu/~<uNID>"
      3. You may request a more human-readable URL that redirects to something like: "http://www.chpc.utah.edu/~<my_name>"
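      As an illustration of items 1-2 above (a minimal sketch, not an official CHPC example), the Python snippet below creates a public_html directory in the user's home directory and writes a simple index page. The page name and the 0o755/0o644 permissions are assumptions about what the web server needs; consult the CHPC web documentation for the exact requirements.

        from pathlib import Path

        # Sketch: create ~/public_html and a minimal index page.
        # Permissions here are assumptions; verify against CHPC's web docs.
        public_html = Path.home() / "public_html"
        public_html.mkdir(mode=0o755, exist_ok=True)

        index = public_html / "index.html"   # hypothetical page name
        index.write_text("<html><body><h1>Hello from CHPC</h1></body></html>\n")
        index.chmod(0o644)

        # The page should then be reachable at
        # http://home.chpc.utah.edu/~<uNID>/index.html
        print(f"Wrote {index}")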
  2. Backup Policies
    1. /scratch file systems are not backed up
    2. The HPC general file system is not backed up
    3. Group spaces are not backed up by CHPC unless the group requests backup and purchases the necessary space on the CHPC archive storage system (see 2-5 below). Note that CHPC documentation also provides information about user-driven backup options on our storage page.
    4. Owned home directory space: The backup of this space is included in the purchase price and consists of:
      1. Full backup weekly
      2. Incremental backup daily
      3. Two-week retention
    5. Archive Backup Service: While CHPC does NOT perform regular backups on the default HPC home directory space or any group spaces, we have recognized the need of some groups to protect their data. CHPC has the ability to make quarterly archives. Each research group is responsible for the cost of the archive space required for this backup. Typically, this service requires the purchase of archive storage that is twice the capacity of the group space being backed up. To schedule this service, please:
      1. send email to helpdesk@chpc.utah.edu
      2. purchase the necessary archive space
      3. CHPC will perform the archive backup
      4. CHPC suggests that the archive space be twice the capacity of the group space being archived, so that we still have a copy of the previous backup for protection if a disaster were to happen mid-archive run.
      5. **DISCLAIMER ON ARCHIVE BACKUPS**
        1. XX
  3. Scratch Disk Space: Scratch space for each HPC system is architected differently. CHPC offers no guarantee on the amount of /scratch disk space available at any given time.
    1. Local Scratch (/scratch/local):
      1. This space is on the local hard drive of the node and therefore is unique to each individual node and is not accessible from any other node. 
      2. This space is encrypted; each time the node is rebooted the encryption is reset from scratch, which in effect purges the content of this space.
      3. /scratch/local on compute nodes is set such that users cannot create a directory under the top level /scratch/local space.
        1. As part of the Slurm job prolog (before the job is started), a job-level directory, /scratch/local/$USER/$SLURM_JOB_ID, is created and set such that only the job owner has access to it. At the end of the job, in the Slurm job epilog, this directory is deleted (a staging sketch using this directory follows this list).
      4. There is no access to /scratch/local outside of a job. 
      5. This space will be the fastest, but not necessarily the largest. 
      6. Users should use this space at their own risk.
      7. This space is not backed up
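      The staging pattern referred to above can be sketched as follows (an illustrative sketch only, not a CHPC-provided script). It assumes it runs inside a Slurm job, so $USER and $SLURM_JOB_ID are set and the prolog has already created the job-level directory; the project, input, and output file names are placeholders.

        import os
        import shutil
        from pathlib import Path

        # Job-level local scratch directory created by the Slurm prolog
        # (per the policy above); the epilog removes it when the job ends.
        scratch = Path("/scratch/local") / os.environ["USER"] / os.environ["SLURM_JOB_ID"]

        # Hypothetical input/output locations; adapt them to your workflow.
        project = Path.home() / "project"
        shutil.copy2(project / "input.dat", scratch / "input.dat")  # stage input to fast local disk

        # ... run the application here, reading and writing under `scratch` ...

        # Copy results home before the job finishes, because the epilog
        # deletes the scratch directory.
        shutil.copy2(scratch / "output.dat", project / "output.dat")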
    2. NFS Scratch:
      1. /scratch/kingspeak/serial is mounted on all interactive nodes and on kingspeak, ash, and ember compute nodes.
      2. /scratch/general/nfs1 is mounted on all interactive nodes and on lonepeak compute nodes.
      3. not intended for use as storage beyond the data's use in batch jobs
      4. scrubbed weekly of files that have not been accessed for over 60 days (see the sketch after this list)
      5. each user will be responsible for creating directories and cleaning up after their jobs 
      6. not backed up
      7. quota layer enabled to facilitate usage reporting
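      As a rough illustration of the 60-day scrub policy (an assumption-laden sketch, not a CHPC tool), the snippet below lists files under a user's NFS scratch directory whose last access time is more than 60 days old and which would therefore be candidates for the weekly scrub. The directory path is a placeholder, and access times reported over NFS may lag actual use.

        import os
        import time
        from pathlib import Path

        # Placeholder path: wherever you keep files on the NFS scratch system.
        scratch_dir = Path("/scratch/general/nfs1") / os.environ["USER"]
        cutoff = time.time() - 60 * 24 * 3600   # 60 days ago, in seconds

        for path in scratch_dir.rglob("*"):
            # st_atime is the last access time; files older than the cutoff
            # match the 60-day criterion described above.
            if path.is_file() and path.stat().st_atime < cutoff:
                print(path)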
    3. Parallel Scratch (/scratch/general/lustre):
      1. This general space is available on all interactive nodes but only on kingspeak, ash, and ember compute nodes
      2. scrubbed weekly of files that have not been accessed for over 60 days.
      3. not intended for use as storage beyond the data's use in batch jobs.
      4. not backed up.
      5. quota layer enabled to facilitate usage reporting
    4. Owner Scratch Storage
      1. configured and made available per the owner group's requirements.
      2. not subject to the general scrub policies that CHPC enforces on CHPC-provided scratch space.
      3. owners/groups can request automatic scrub scripts to be run per their specifications on their scratch spaces.
      4. not backed up.
      5. quota layer enabled to facilitate usage reporting.
      6. quota limits can be configured per the owner/group's needs.
  4. File Transfer Services: see 3.2 Guest File Transfer Policy
Last Updated: 8/19/21