2.1 General HPC Cluster Policies

2.1.1 Cluster Interactive Node Policy

  1. The interactive nodes are the front-end interface systems for access to the HPC clusters. Each cluster has a set of interactive nodes associated with it. For example, if you ssh to "ember.chpc.utah.edu", you will actually be connected to one of the ember interactive nodes, such as ember1 or ember2. For ash, the interactive nodes accessed via "ash.chpc.utah.edu" are restricted to the groups that have an allocation to run on this cluster. The interactive nodes for guests are ash5 and ash6; they can be accessed either by specifying the specific node or by using "ash-guest.chpc.utah.edu".
  2. Interactive nodes are your interface with the computational nodes and are where you interact with the batch system. Please see our User Guides for details. Processes run directly on these nodes should be limited to tasks such as editing, data transfer and management, data analysis, compiling codes, and debugging, as long as they are not resource intensive (memory, CPU, network, and/or I/O). Any resource-intensive work must be run on the compute nodes through the batch system.
  3. Any process that is consuming extensive resources on the interactive node may be killed, especially when it begins to impact other users on that node.
    1. CHPC will usually allow up to 15 minutes CPU time before considering killing a process, unless the resource usage is impacting other users. If the process is creating significant problems on the system, the process will be killed immediately and the user will be contacted via email.
    2. Owners of the process are notified via a tty message (if possible) and an email message is sent when the process is killed.
    3. Repeated abuse of interactive nodes may result in notification of your PI and potentially locking your account.
  4. The scratch spaces that are visible on the compute nodes of the clusters are mounted on the interactive nodes.
    1. Use the interactive nodes to migrate your data to one of the scratch spaces, and run your I/O-intensive batch work from the scratch space, NOT from your home directory or group space. Note: an I/O-intensive process could involve either excessive MB/second or excessive I/O operations/second. A minimal batch-script sketch of this workflow appears after this list.
    2. For storage policies please see 3.1 File Storage Policies.
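
A minimal sketch of the scratch workflow described above, as it might appear in a batch script; the scratch path, account, partition, and program names are placeholders, so substitute the values that apply to your group and cluster:

    #!/bin/bash
    #SBATCH --account=my-pi          # placeholder: your group's allocation account
    #SBATCH --partition=kingspeak    # placeholder: any CHPC cluster partition
    #SBATCH --nodes=1
    #SBATCH --time=02:00:00

    # Stage input data into a scratch space (example path only)
    SCR=/scratch/general/lustre/$USER/$SLURM_JOB_ID
    mkdir -p "$SCR"
    cp -r "$HOME/project/input" "$SCR/"

    # Run the I/O-intensive work from scratch, not from the home directory
    cd "$SCR"
    ./my_program input > output.log

    # Copy results back to the home or group space, then clean up
    cp output.log "$HOME/project/results/"
    rm -rf "$SCR"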

2.1.2 Batch Policies (all clusters)

General Queuing Policies

  • Users who have exhausted their allocation will not have their jobs dispatched unless there are available free cycles on the system. Preemption rules are determined on each cluster - see the cluster's scheduling policies (links below).
  • There is a maximum number of nodes allowed per job, set at approximately 1/2 of the general nodes on each cluster.
  • Special access is given to a long QOS to exceed the MAX walltime limit on a case-by-case basis. Please send any request for access to this QOS to helpdesk@chpc.utah.edu with an explanation of why the job cannot be run under the regular limits (with checkpointing/restarts/more nodes). There is a limit of two nodes running with this QOS at any given time. A sketch of a batch header that uses a long QOS appears after this list.
  • Exceptions, again on a case-by-case basis, can also be made to the maximum number of nodes allowed per job. Please contact CHPC via helpdesk@chpc.utah.edu with an explanation of your need for an exception.
  • Each of the computational clusters has its own set of scheduling policies pertaining to job limits, access and priorities. Please see the appropriate policy for the details of any particular cluster.
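
As a sketch only, a batch header for a job that has been granted long-QOS access might look like the following; the QOS name, account, cluster, and time limit shown are illustrative, and CHPC will confirm the exact QOS name and limits when access is approved:

    #!/bin/bash
    #SBATCH --account=my-pi          # placeholder allocation account
    #SBATCH --partition=ember        # cluster on which the long QOS was granted (example)
    #SBATCH --qos=ember-long         # long QOS name provided by CHPC (example)
    #SBATCH --nodes=1                # at most two nodes may run under this QOS at any time
    #SBATCH --time=10-00:00:00       # walltime beyond the regular 3-day limit
    srun ./my_long_running_program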

Reservations

Users may request to reserve nodes for special circumstances. The request must come from the PI/Faculty advisor of the user's research group, and the group's allocation must be sufficient to cover the duration of the reservation. A reservation may be shared by multiple users. The maximum number of nodes allowed for a reservation is half the number of general nodes on the cluster being requested. The maximum duration for reservations is two weeks. An example of submitting a job against a granted reservation is sketched after the list below. The PI/Faculty advisor should send a request to issues@chpc.utah.edu with the following information:

  • Which cluster
  • Number of Nodes/Cores
  • Starting date and time (please ask ahead by at least the MAX walltime for that cluster)
  • Duration
  • User or Users on the reservation
  • Any special requirements (longer MAX walltime for example)
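
Once the reservation is created, CHPC will provide its name, and jobs are then submitted against it. A minimal sketch (the reservation name, account, cluster, and node count below are placeholders):

    #!/bin/bash
    #SBATCH --account=my-pi              # the PI's allocation account
    #SBATCH --partition=kingspeak        # cluster on which the reservation was made (example)
    #SBATCH --reservation=my-pi-res      # reservation name supplied by CHPC (placeholder)
    #SBATCH --nodes=4
    #SBATCH --time=24:00:00
    srun ./my_program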

Owner-Guest

Owner-guest access is enabled on owner nodes on ember, kingspeak, and lonepeak, as well as on ash. Jobs run in this manner are preemptable. To do so, you will need to specify a special account. Jobs using these preemption accounts will not count against your group's allocation. Jobs run in this manner should not use /scratch/local, as this scratch space will not be cleaned when the job is preempted, nor is it accessible by the user running the guest job to retrieve any needed files.

To specify owner-guest, use the partition cluster-guest (substituting the appropriate cluster name) and the account owner-guest; on ash the account for guest jobs is smithp-guest. Your job will be preempted if a job comes in from a user in the group of the owner whose node(s) your job received.

You can also target a specific owner group by using the PI name in your resource specification via the Slurm constraint definition. You can also target a specific core count by using a constraint with the core count (specified as c#), or a memory amount by using the Slurm batch directive to specify memory. The use of these is described on the Slurm documentation page.
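
As a sketch, a guest job on kingspeak owner nodes that also targets a hypothetical owner group "smith" and 20-core nodes might look like this; the constraint line is optional, and the owner and program names are placeholders:

    #!/bin/bash
    #SBATCH --partition=kingspeak-guest   # cluster-guest partition (kingspeak shown)
    #SBATCH --account=owner-guest         # on ash, use smithp-guest instead
    #SBATCH --constraint="smith&c20"      # optional: hypothetical owner "smith" AND 20-core nodes
    #SBATCH --nodes=1
    #SBATCH --time=12:00:00
    # Avoid /scratch/local in guest jobs; it is not cleaned or reachable after preemption.
    srun ./my_program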

2.1.3 Ember Job Scheduling Policy

Job Control

Jobs will be controlled through the batch system using SLURM.

  1. Node sharing. No node sharing.
  2. Allocations. Allocations will be handled through the regular CHPC allocation committee. Allocations on owner nodes will be at the direction of node owners.
  3. Best effort to allocate nodes of same CPU speed.
  4. Max time limit for jobs on the general nodes is 72 hours.
  5. Max nodes per job on the general nodes is set to approximately 1/2 of the total number of general nodes.
  6. Scheduling is set based on a current highest priority set for every job. We do have backfill enabled.
  7. Fairshare boost in priority at user level. Minimal boost to help users who haven't been running recently. Our Fairshare window is two weeks.
  8. Short jobs are given a boost based on their length of time in the queue relative to the wall time they have requested.
  9. Reward for parallelism. Set at the global level.
  10. Partition settings
     Partition Name | Access | Accounts | Node/core count | Memory | Features | Node specification
     ember | all | <pi> | 73/876 | 24576 MB | chpc, general, c12 | em[019-022,075-142,144]
     ember-gpu | by request (GPUs) | ember-gpu | 11/132 + 2 GPUs per node | 49152 MB | chpc, c12, gpu, nv2090 | em[001-008,010-012]
     ember-freecycle | all | all | | 24576 MB | chpc, c12 | em[019-022,075-142,144]
     ember-guest | all | all | sum of owners | see owner nodes | | em[019-022,075-142,144]
     <owner>-em | restricted | <owner>-em, owner-guest | use sinfo to see details | use sinfo to see details | <owner>, core count | em[019-022,075-142,144]
     Total | | | 161/2292 | | |
  11. QOS settings. The majority of a job's priority is based on a quality of service definition, or QOS. The following QOSs are defined:
     QOS | Priority | Preempts | Preempt Mode | Flags | GrpNodes | MaxWall
     ember | 100000 | ember-freecycle | cancel | | 73 | 3-00:00:00 (3 days)
     ember-freecycle | 10000 | | cancel | NoReserve | 73 | 3-00:00:00
     ember-guest | 10000 | | cancel | NoReserve | 73 | 3-00:00:00
     ember-long | 100000 | ember-guest | cancel | | 73 | 14-00:00:00 (14 days)
     <owner>-em | 100000 | ember-guest | cancel | | 12 | 14-00:00:00
  12. Interactive nodes. For general use there are ember1.chpc.utah.edu and ember2.chpc.utah.edu; access either via ember.chpc.utah.edu. There are also owner interactive nodes that are restricted to the owner group. A sample batch header for a general-allocation job on ember is sketched below.
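
As a sketch, a job run against a group's allocation on the ember general partition, within the limits above, might begin with the following; the account and program names are placeholders:

    #!/bin/bash
    #SBATCH --account=my-pi      # placeholder: your group's allocation account (<pi>)
    #SBATCH --partition=ember    # general ember partition
    #SBATCH --nodes=2            # must stay within ~1/2 of the general node count
    #SBATCH --time=72:00:00      # 3-day maximum walltime on general nodes
    srun ./my_program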

2.1.4 Kingspeak Job Scheduling Policy

Job Control

Jobs will be controlled through the batch system using SLURM.

  1. Node sharing. Node sharing is enabled on GPU nodes and on owner nodes if requested by the owner group.
  2. Allocations. Allocation on general nodes will be handled through the regular CHPC allocation committee. Allocations on owner nodes will be at the direction of node owners.
  3. Best effort to allocate nodes of same CPU speed
  4. Max time limit for jobs is 72 hours (on general nodes)
  5. Max nodes per job on the general nodes is set to approximately 1/2 of the total number of general nodes.
  6. Scheduling is set based on a current highest priority set for every job. We do have backfill enabled.
  7. Fairshare boost in priority at user level. Minimal boost to help users who haven't been running recently. Our Fairshare window is two weeks.
  8. Short jobs are given a boost based on their length of time in the queue relative to the wall time they have requested.
  9. Reward for parallelism. Set at the global level.
    1. Partitions
       Partition Name | Access | Accounts | Node/core count | Memory | Features | Node specification
       kingspeak | all | <pi> | 32/512; 12/240; 4/80 | 64 GB; 384 GB | chpc, general, core count | kp[001-032,110-111,158-167,196-199]
       kingspeak-gpu | by request | kingspeak-gpu | see GPU page for details | | | kp[297-300]
       kingspeak-freecycle | all | all | 32/512; 12/240; 4/80 | 65536 MB, 393216 MB | chpc, general, c16, c20 | kp[001-032,110-111,158-167,196-199]
       kingspeak-guest | all | all | sum of owners | see owner nodes | <owner>, core count | kp[033-099,101-108,112-157,168-195,200-296,301-358,363-388]
       <owner>-kp | restricted | <owner>-kp, owner-guest | use sinfo to see details | use sinfo to see details | <owner>, core count | kp[033-099,101-108,112-157,168-195,200-296,301-358,363-388]
       <owner>-gpu-kp | restricted | <owner>-gpu-kp | see GPU page for details | | | kp[359-362]
       kingspeak-gpu-guest | by request | kingspeak-gpu-guest | see GPU page for details | | | kp[359-362]
  10. Job priorities

    The majority of a job's priority is based on a quality of service definition, or QOS. The following QOSs are defined (an example of using sinfo to look up owner partition details is sketched at the end of this section):

     QOS | Priority | Preempts | Preempt Mode | Flags | GrpNodes | MaxWall
     kingspeak | 100000 | kingspeak-freecycle | cancel | | 48 | 3-00:00:00 (3 days)
     kingspeak-freecycle | 10000 | | cancel | NoReserve | 48 | 3-00:00:00
     kingspeak-guest | 10000 | | cancel | NoReserve | all owner nodes | 3-00:00:00
     kingspeak-long | 100000 | kingspeak-freecycle | cancel | | 48 | 14-00:00:00 (14 days)
     <owner>-kp | 100000 | kingspeak-guest | cancel | varies | all owner nodes; use sinfo to see per-owner details | set by owner; default is 14-00:00:00

  11. Interactive nodes. For general use there are two interactive nodes, kingspeak1.chpc.utah.edu and kingspeak2.chpc.utah.edu.  Access either via kingspeak.chpc.utah.edu. There are also owner interactive nodes that are restricted to the owner group.
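
Because owner partition limits vary, sinfo can be used to look up the node count, cores per node, memory, and features for any partition. A sketch (the owner partition name below is a placeholder):

    # Node count, cores per node, memory (MB), features, and node list for a partition
    sinfo -p kingspeak-guest -o "%P %D %c %m %f %N"

    # The same query against a hypothetical owner partition
    sinfo -p smith-kp -o "%P %D %c %m %f %N"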

2.1.5 Lonepeak Job Scheduling Policy

Job Control

    Jobs will be controlled through the batch system using Slurm.
    1. Node sharing. No node sharing.
    2. Allocations. This cluster is completely unallocated.  All jobs on the general resources will run without allocation and without preemption.
    3. Max time limit for jobs is 72 hours.
    4. Max nodes per job on the general nodes is set to approximately 1/2 of the total number of general nodes.
    5. Fairshare will be turned on.
    6. Scheduling is set based on the current highest priority set for every job.
    7. Reservations are allowed for users who show a need for the large memory available on these nodes. Reservations will be for a maximum window of two weeks, and a maximum of 50% of the general nodes on the cluster may be reserved at any given time. Reservations will be made on a first-come, first-served basis. Up to 96 hours may be needed for the reservation to start. A minimal batch sketch that requests one of the larger-memory nodes appears at the end of this section.
    8. Partitions
      Partition Name | Access | Accounts | Node/core count | Memory | Features | Node specification
      lonepeak | all | <pi> | 8/160 | varies (most nodes have 48 or 72 GB) | | lp[009-016]
      lonepeak | all | <pi> | 59/708 | varies (most nodes have 48 or 72 GB) | | lp[001-008,033-063,067,073-074,077-078,088,091-092,097-105,107-109,111-112]
      lonepeak | all | <pi> | 32/256 | varies (most nodes have 48 or 72 GB) | | lp[018-032,064-066,068-072,075-076,080,083-087,093,095-096]
      lonepeak-guest | all | all | sum of owners | see owner nodes | | varies
      lonepeak-freecycle | not in use | | | | |
      <owner>-lp | restricted | <owner>-lp, owner-guest | 20/400 | 64 GB | <owner> | lp[113-132]

    9. QOS Settings:

      The majority of a job's priority will be set based on a quality of service definition or QOS.

      QOS | Priority | Preempts | Preempt Mode | Flags | GrpNodes | MaxWall
      lonepeak | 100000 | | cancel | | 99 | 3-00:00:00 (3 days)
      lonepeak-freecycle | not in use | | | | |
      lonepeak-guest | 10000 | | cancel | NoReserve | 20 | 3-00:00:00
      lonepeak-long | 100000 | | cancel | | | 14-00:00:00 (14 days)
      <owner>-lp | 100000 | lonepeak-guest | cancel | varies | 20 | set by owner; default is 14-00:00:00

    10. Interactive nodes. For general use there are two interactive nodes, lonepeak1.chpc.utah.edu and lonepeak2.chpc.utah.edu. Access either via lonepeak.chpc.utah.edu.  There are also owner interactive nodes that are restricted to the owner group.
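
As a sketch, an unallocated job that targets one of the larger-memory general nodes on lonepeak might begin with the following; the account, memory request, and program name are placeholders and should match your group and the nodes you actually need:

    #!/bin/bash
    #SBATCH --account=my-pi      # placeholder group account; no allocation is consumed on lonepeak
    #SBATCH --partition=lonepeak
    #SBATCH --nodes=1            # whole node; lonepeak does not use node sharing
    #SBATCH --mem=64000          # request a node with at least ~64 GB of memory (MB units)
    #SBATCH --time=72:00:00      # 3-day general limit
    srun ./my_large_memory_program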

2.1.6 Notchpeak Job Scheduling Policy

Job Control

Jobs will be controlled through the batch system using SLURM.

  1. Node sharing. Node sharing is enabled on GPU nodes and for owner jobs on owner nodes if requested by the owner group. A minimal GPU node-sharing sketch appears at the end of this section.
  2. Allocations. Allocation on general nodes will be handled through the regular CHPC allocation committee. Allocations on owner nodes will be at the direction of node owners.
  3. Best effort to allocate nodes of same CPU speed
  4. Max time limit for jobs is 72 hours (on general nodes).
  5. Max nodes per job on the general nodes is set to approximately 1/2 of the total number of general nodes.
  6. Scheduling is set based on a current highest priority set for every job. We do have backfill enabled.
  7. Fairshare boost in priority at user level. Minimal boost to help users who haven't been running recently. Our Fairshare window is two weeks.
  8. Short jobs are given a boost based on their length of time in the queue relative to the wall time they have requested.
  9. Reward for parallelism. Set at the global level.
    1. Partitions
       Partition Name | Access | Accounts | Node/core count | Memory | Features | Node specification
       notchpeak | all | <pi> | 14/448 | 96 GB, 192 GB | chpc, general, c32 | notch[004-018]
       notchpeak-gpu | by request | notchpeak-gpu | see GPU page for details | | | notch[001-003]
       notchpeak-freecycle | all | all | 18/576 | 96 GB, 192 GB | chpc, general, c32 | notch[001-018]
       notchpeak-guest | all | all | sum of owners | see owner nodes | varies | use sinfo to see details
       <owner>-np | restricted | <owner>-np, owner-guest | use sinfo to see details | use sinfo to see details | <owner>, core count | use sinfo to see details
  10. Job priorities

    The majority of a job's priority is based on a quality of service definition, or QOS. The following QOSs are defined:

     QOS | Priority | Preempts | Preempt Mode | Flags | GrpNodes | MaxWall
     notchpeak | 100000 | notchpeak-freecycle | cancel | | 18 | 3-00:00:00 (3 days)
     notchpeak-freecycle | 10000 | | cancel | NoReserve | 18 | 3-00:00:00
     notchpeak-guest | 10000 | | cancel | NoReserve | all owner nodes | 3-00:00:00
     notchpeak-long | 100000 | notchpeak-freecycle | cancel | | 18 | 14-00:00:00 (14 days)
     <owner>-np | 100000 | notchpeak-guest | cancel | varies | use sinfo to see details | set by owner; default is 14-00:00:00

  11. Interactive nodes. For general use there are two interactive nodes, notchpeak1.chpc.utah.edu and notchpeak2.chpc.utah.edu.  Access either via notchpeak.chpc.utah.edu. There are also owner interactive nodes that are restricted to the owner group.
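
Because node sharing is enabled on the GPU nodes, a GPU job only requests the cores, memory, and GPUs it will use; see the GPU page for the authoritative settings. A minimal sketch (access to the GPU account/partition is by request, and the task count, memory, GPU request, and program name below are placeholders):

    #!/bin/bash
    #SBATCH --account=notchpeak-gpu     # GPU account (access by request)
    #SBATCH --partition=notchpeak-gpu   # GPU partition, notch[001-003]
    #SBATCH --nodes=1
    #SBATCH --ntasks=4                  # part of a node, since GPU nodes are shared
    #SBATCH --mem=16G
    #SBATCH --gres=gpu:1                # one GPU; a type can be added if required (see GPU page)
    #SBATCH --time=24:00:00
    srun ./my_gpu_program
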
Last Updated: 4/5/18