
Announcing new scratch file system: /scratch/general/lustre

Date Posted: March 8th, 2016

CHPC now has a new scratch file system, /scratch/general/lustre, available for use on kingspeak and ash. This is a Lustre parallel file system with a capacity of 700 TB. It is now open to all users.

We are working on making this file system available on the other clusters and will let you know when this is in place.

We ask that you start using this new scratch file system as soon as possible. Once it is available on all clusters, we will begin draining /scratch/kingspeak/serial by making it read-only, so that we can rebuild that file system and fix a hardware issue.

Lustre is a parallel distributed file system commonly used with large-scale computing clusters. It is a scalable storage architecture with three main components: Metadata Servers, Object Storage Servers, and clients. The Metadata Servers (MDS) provide metadata services for the file system; each manages a Metadata Target (MDT) that stores the file metadata. The Object Storage Servers (OSS) manage the Object Storage Targets (OSTs) that store the file data objects.
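You can see these components from any client with the standard `lfs df` command, which reports per-target usage for the MDT and each OST behind a mount point. A minimal sketch, wrapped in Python for scripting; the mount path is the new scratch file system named above:

    import subprocess

    # List the MDT and OSTs backing the new scratch file system, with
    # per-target usage. "lfs df" is the standard Lustre client query;
    # -h prints human-readable sizes.
    subprocess.run(["lfs", "df", "-h", "/scratch/general/lustre"], check=True)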

A given file can be “striped” across multiple OSTs, that is, broken into chunks and written across different sets of disks. This can improve I/O performance, especially for large jobs doing large amounts of simultaneous I/O from multiple processes, such as MPI or threaded jobs.
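For example, the `lfs getstripe` command shows how a particular file has been laid out: its stripe count, stripe size, and the OST indices holding its objects. A minimal sketch; the file path below is hypothetical:

    import subprocess

    # Show the striping layout of an existing file: stripe count, stripe
    # size, and which OSTs hold its objects. The path is hypothetical.
    subprocess.run(
        ["lfs", "getstripe", "/scratch/general/lustre/myuser/output.dat"],
        check=True,
    )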

Most users will not need to change the default settings of a stripe width of 1 and a stripe size of 1 MB. However, if you would like to explore setting these to different values, let us know: we are looking for test cases to use in benchmarking and tuning the new Lustre file system.
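If you do experiment with non-default layouts, striping is typically set per directory with `lfs setstripe`; files created in that directory afterward inherit the layout. A minimal sketch, assuming a hypothetical directory under the new file system; exact flag spellings can vary slightly between Lustre versions:

    import os
    import subprocess

    # Hypothetical working directory under the new scratch file system.
    stripe_dir = os.path.expandvars("/scratch/general/lustre/$USER/wide_stripe")
    os.makedirs(stripe_dir, exist_ok=True)

    # Stripe files created here across 4 OSTs in 4 MB chunks. setstripe
    # affects only files created after the call, not existing ones.
    subprocess.run(
        ["lfs", "setstripe", "--stripe-count", "4", "--stripe-size", "4M", stripe_dir],
        check=True,
    )

    # Verify the layout that new files will inherit.
    subprocess.run(["lfs", "getstripe", stripe_dir], check=True)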

If you have any questions or problems, please let us know by sending a report to helpdesk@chpc.utah.edu.
