/scratch/general/lustre now available for use on ember and lonepeak

The /scratch/general/lustre file system is now accessible from all CHPC interactive nodes (including the frisco, meteo, and atmos nodes) as well as the compute nodes of all clusters except tangent. Because of the nature of tangent, we cannot reach all of its nodes at the same time, so we will add the mounts of this space as we gain access to the individual compute nodes.

As a reminder, this scratch space has a capacity of 700 TB. It is a Lustre parallel distributed file system with a scalable storage architecture built from three main components: Metadata Servers, Object Storage Servers, and clients. The Metadata Servers (MDS) provide metadata services for the file system and manage the Metadata Targets (MDT) that store file metadata. The Object Storage Servers (OSS) manage the Object Storage Targets (OST) that store the file data objects. A given file is "striped" across multiple OSTs, i.e., broken into chunks that are written across different sets of disks. This can result in I/O performance benefits, especially for large jobs doing large amounts of simultaneous I/O from multiple processes, as in MPI or threaded jobs. Most users will not need to change the default settings of a stripe width of 1 and a stripe size of 1 MB; however, if you would like to explore setting these to different values, let us know, as we are looking for test cases to work on the benchmarking and tuning of the new Lustre file system.
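As a sketch of how striping is inspected and changed, the standard Lustre client tools provide the `lfs getstripe` and `lfs setstripe` commands. The directory path and the stripe values below are illustrative only, and on a machine without the Lustre tools installed this snippet just prints what it would run:

```shell
#!/bin/sh
# Illustrative directory under the new scratch space (substitute your own):
DIR=/scratch/general/lustre/$USER/mydata

if command -v lfs >/dev/null 2>&1; then
    # Show the current stripe layout (defaults: stripe count 1, stripe size 1 MB):
    lfs getstripe "$DIR"
    # Stripe the directory across 4 OSTs with a 4 MB stripe size; files
    # created inside it afterwards inherit this layout (values illustrative):
    lfs setstripe -c 4 -S 4M "$DIR"
else
    echo "lfs not found; would run: lfs setstripe -c 4 -S 4M $DIR"
fi
```

Striping changes only affect files created after `lfs setstripe` is run; existing files keep the layout they were written with.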

I will again add that users should start using this new scratch space instead of /scratch/kingspeak/serial, as we will make /scratch/kingspeak/serial read-only in about two weeks. Users should also begin migrating any files they need from /scratch/kingspeak to other locations. Watch for upcoming announcements with specific dates and more details.

Please address any questions to issues@chpc.utah.edu.


Last Updated: 6/10/21