
3.1 File Storage Policies

  1. CHPC Home Directory File Systems
    Many of the CHPC home directory file systems are based on NFS (Network File System), and proper management of files is critical to the performance of both applications and the network as a whole. Because all files in home directories are NFS mounted from a file server, every request for data must travel over the network. Therefore, it is advised that all executables and input files be copied to a scratch directory before running a job on the clusters.
    1. The general CHPC home directory file system (CHPC_HPC) is available to users who have a CHPC account and do not have a department or group home directory file system maintained by CHPC (see item 2-2 below). This file system enforces a quota of 50 GB per user. If you need a temporary increase to this limit, please let us know and we can increase it based on your needs. To apply for a permanent quota increase, the CHPC PI responsible for the user should contact CHPC and make a formal request, including a justification for the increase. This file system is not backed up, and users are encouraged to move important data to a file system that is backed up, such as a department file server.
    2. Department or Group owned storage
      1. Departments or PIs with sponsored research projects can work with CHPC to procure storage to be used as CHPC Home Directory or Group Storage.
      2. Home directory space purchases are limited to 1TB per group and include full backup as described in the Backup Policies below.
      3. The owner of group storage can arrange for archival back up as described in the Backup Policies below.
      4. Usage policies for this storage will be set by the owning department/group.
      5. When this storage is hosted on shared infrastructure, all groups are still expected to be 'good citizens': utilization should be moderate and should not impact other users of the file server.
      6. Quotas
        1. User and/or group quotas can be used to control usage.
        2. The quota layer will be enabled, allowing usage reporting even if quota limits are not set.
      7. Any backups run regularly by CHPC have a two-week retention period - See Backup Policies below.
      8. Life Cycle
        1. CHPC will support storage for the duration of the warranty period.
        2. A 'best effort' will be applied to supporting storage beyond the warranty period.
        3. Factors that would contribute to the termination of 'best effort' support include, but are not limited to:
          1. General health of the device
          2. Potential impact of maintaining an unsupported device
          3. Ability to acquire and replace components
    3. Web Support from home directories
      1. Place html files in public_html directories
      2. URL published: "<uNID>"
      3. Users may request a more human-readable URL handle that redirects to something like: "<my_name>"
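The staging advice above (copying executables and input files to a scratch directory before a job runs, and copying results back afterward) can be sketched as a small shell script. This is a minimal, self-contained illustration, not a CHPC-provided script: temporary directories stand in for the NFS home directory and a scratch path such as /scratch/general/nfs1/$USER, and the input file and the placeholder command are hypothetical.

```shell
#!/bin/sh
# Minimal sketch of the staging pattern recommended above. In a real job,
# HOMEDIR would be your NFS-mounted home directory and SCRATCH a path such
# as /scratch/general/nfs1/$USER; temporary directories stand in for both
# here so the sketch runs anywhere.
set -e
HOMEDIR=$(mktemp -d)     # stands in for the NFS home directory
SCRATCH=$(mktemp -d)     # stands in for the scratch file system
echo "input data" > "$HOMEDIR/input.dat"   # hypothetical input file

# 1. Stage input from home to scratch before the job starts.
cp "$HOMEDIR/input.dat" "$SCRATCH/"

# 2. Run the job against the scratch copy (a placeholder command here,
#    where your executable would normally run).
tr 'a-z' 'A-Z' < "$SCRATCH/input.dat" > "$SCRATCH/output.dat"

# 3. Copy results back to home and clean up scratch at the end of the job.
cp "$SCRATCH/output.dat" "$HOMEDIR/"
rm -rf "$SCRATCH"
```

Running job I/O against the scratch copy keeps repeated reads and writes off the NFS-mounted home directory, so only the initial stage-in and final stage-out cross the network.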
  2. Backup Policies
    1. /scratch file systems are not backed up
    2. The HPC general file system is not backed up
    3. Owned home directory space: At the present time, due to limitations on the capacity of CHPC's backup capabilities, CHPC limits the amount of owned home directory space that can be purchased to 1 TB per group. The backup of this space is included in the price and includes:
      1. Full backup weekly
      2. Incremental backup daily
      3. Two week retention
    4. Archive Backup Service: While CHPC does NOT perform regular backups on the default HPC home directory space or any group spaces, we recognize that some groups need to protect their data. CHPC has the ability to make periodic archive backups of research group data to tape. These archives can be no more frequent than once per quarter. Each research group is responsible for the cost of the tapes. Future archive backups can be made to the original tapes (more tapes may be needed if the data set has grown), or new tapes can be purchased with CHPC's assistance. To schedule this service, please:
      1. send email to
      2. purchase tapes (CHPC will assist you with tape requirements).
      3. CHPC will perform the archive backup to tape.
      4. tapes can be stored at CHPC or can be stored by the PI.
      5. CHPC suggests that groups keep two sets of tapes so that while a full backup is being written to one set, the copy on the other set remains intact should a disaster occur mid-archive.
        1. Periodic archive backups are not done automatically. If you request periodic archive backups, CHPC will send you a quarterly reminder that it is time to request a backup.
        2. These backups are made onto a high-density tape medium and are either retained in the CHPC backup library or can be delivered to the user for long-term storage at the user's discretion.
        3. Note that high-density tapes have a limited life span and should not be regarded as a reliable medium for archive recovery beyond a reasonable length of time. For this reason, after two years any archive tapes that the user still wishes to maintain for possible recovery should be replaced with new media.
        4. CHPC can at that time furnish the user with a quote for the number of tapes that need to be replaced, and can then make new duplicate tapes of the archive, which again may be stored for up to two years.
  3. Scratch Disk Space: Scratch space for each HPC system is architected differently. CHPC offers no guarantee on the amount of /scratch disk space available at any given time.
    1. Local Scratch (/scratch/local):
      1. unique to each individual node and is not accessible from any other node. 
      2. cleaned aggressively: scrubbed weekly of files that have not been modified for over 7 days. 
      3. users are expected to clean this space (if used) at the end of every job. 
      4. there is no access to /scratch/local outside of a job. 
      5. this space will be the fastest, but not necessarily the largest. 
      6. users should use this space at their own risk.
      7. not backed up
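A scrub of the kind described above (removing files not modified for over 7 days) is commonly implemented as a find sweep. The sketch below is illustrative only, not CHPC's actual scrub script, and a temporary directory stands in for /scratch/local so the example is self-contained.

```shell
#!/bin/sh
# Illustrative scrub sweep: delete regular files whose modification time
# is more than 7 days old. A temporary directory stands in for
# /scratch/local. Uses GNU touch's -d option to back-date a test file.
set -e
DIR=$(mktemp -d)
touch "$DIR/fresh.dat"                    # modified now: survives the sweep
touch -d "10 days ago" "$DIR/stale.dat"   # modified 10 days ago: removed

# Remove regular files not modified within the last 7 days.
find "$DIR" -type f -mtime +7 -delete
```

The same pattern with `-atime +60` would match the 60-day access-time scrub applied to the NFS and parallel scratch file systems.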
    2. NFS Scratch:
      1. /scratch/kingspeak/serial mounted on all interactive nodes and on kingspeak, ash and ember compute nodes.
      2. /scratch/general/nfs1 mounted on all interactive nodes and on lonepeak compute nodes
      3. not intended for use as storage beyond the data's use in batch jobs
      4. scrubbed weekly of files that have not been accessed for over 60 days 
      5. each user will be responsible for creating directories and cleaning up after their jobs 
      6. not backed up
      7. quota layer enabled to facilitate usage reporting
    3. Parallel Scratch: (/scratch/general/lustre): 
      1. This general space is available on all interactive nodes and on kingspeak, ash, and ember compute nodes.
      2. scrubbed weekly of files that have not been accessed for over 60 days.
      3. not intended for use as storage beyond the data's use in batch jobs.
      4. not backed up.
      5. quota layer enabled to facilitate usage reporting
    4. Owner Scratch Storage
      1. configured and made available per the owner group's requirements.
      2. not subject to the general scrub policies that CHPC enforces on CHPC provided scratch space.
      3. owners/groups can request automatic scrub scripts to be run per their specifications on their scratch spaces.
      4. not backed up.
      5. quota layer enabled to facilitate usage reporting.
      6. quota limits can be configured per owner/groups needs.
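Since the quota layer is enabled for usage reporting on these file systems, users can check their own footprint before any limit becomes an issue. The commands below are a hedged sketch; a temporary directory with a small test file stands in for a real scratch or home directory path.

```shell
#!/bin/sh
# Sketch: checking usage where the quota layer reports it. A temporary
# directory with a 64 KB test file stands in for a real scratch directory
# such as /scratch/general/nfs1/$USER.
set -e
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/data.bin" bs=1024 count=64 2>/dev/null  # 64 KB

df -h "$DIR"    # usage of the underlying file system
du -sk "$DIR"   # directory usage in kilobytes (at least 64 here)
```

On file systems where per-user quotas are enforced, the standard `quota -s` command can also report your limits and current usage.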
  4. File Transfer Services

3.2 Guest File Transfer Policy
Last Updated: 6/11/21