Jobs will be controlled through the batch system using Slurm.
- Node sharing: none. Nodes are allocated exclusively to a single job.
- Allocations: handled through a block grant to the owner.
- A best effort will be made to allocate nodes of the same CPU speed.
- The maximum time limit is outlined in the job priority section below.
- Scheduling is based on the current highest priority among all jobs, with backfill enabled.
- Fairshare: a small priority boost at the user level to help users who have not been running recently. The fairshare window is two weeks.
- Short jobs receive a priority boost based on their time in the queue relative to the wall time they have requested.
- Reward for parallelism, set at the global level.
- Special reservations: available upon request.
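The priority factors above (fairshare, queue age, job size, QOS) can be inspected with standard Slurm tools; a sketch of the typical commands:

```shell
# Show the per-factor priority breakdown (age, fairshare, job size, QOS)
# for pending jobs, in long format:
sprio -l

# Show fairshare usage for the current user only (-U), which feeds the
# two-week fairshare window described above:
sshare -U
```

Both commands must be run on a system with the Slurm client tools installed and a reachable controller.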
| Partition Name | Access | Accounts | Nodes | Cores per node (total) | Memory | Features |
|---|---|---|---|---|---|---|
| smithp-ash | restricted | smithp-ash, smithp-ash-cs | ash[001-251] | 12 (3012) | 24 GB | smithp, c12 |
| | | | ash[416-417] | 12 (24) | | |
| | | | ash[252-415] | 20 (3280) | 64 GB | smithp, c20 |
| | | | ash[418-465] | 24 (1152) | 128 GB | smithp, c24 |
| ash-guest | all | smithp-guest | ash[001-251] | 12 (3012) | 24 GB | smithp, c12 |
| | | | ash[416-417] | 12 (24) | | |
| | | | ash[252-415] | 20 (3280) | 64 GB | smithp, c20 |
| | | | ash[418-465] | 24 (1152) | 128 GB | smithp, c24 |
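As an illustration of how a partition, account, and node feature from the table might be requested, the sketch below assumes an owner-group user submitting under the smithp-ash account; the job name and executable are placeholders:

```shell
#!/bin/bash
#SBATCH --partition=smithp-ash   # owner partition from the table above
#SBATCH --account=smithp-ash     # one of the access accounts for this partition
#SBATCH --constraint=c20         # restrict to the 20-core, 64 GB nodes
#SBATCH --nodes=2                # whole nodes; there is no node sharing
#SBATCH --time=24:00:00          # at most the 24-hour limit
#SBATCH --job-name=example       # placeholder job name

srun ./my_program                # placeholder executable
```

The `--constraint` flag selects nodes by the feature tags (c12, c20, c24) listed in the Features column.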
- Job Priority: the majority of a job's priority will be set based on a quality of service (QOS) definition. The following initial QOSs are defined:
| QOS | Priority | Preempts | Preempt Mode | Flags | GrpNodes | MaxWall |
|---|---|---|---|---|---|---|
| ash | 1000 | ash-guest | cancel | | 417 | 24:00:00 |
| cs-ash | 1000 | ash-guest | cancel | | 417 | 24:00:00 |
| js-ash | 1000 | ash-guest | cancel | | 417 | 24:00:00 |
| ash-guest | 1 | | cancel | NoReserve | 417 | 24:00:00 |
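Guest jobs run at low priority and are preempted (cancelled) when owner jobs need the nodes. A guest submission might look like the following sketch; the script name is a placeholder, and the sacctmgr field list is illustrative:

```shell
# Submit a guest job; guest jobs are preemptable by the owner QOSs above:
sbatch --partition=ash-guest --account=smithp-guest --qos=ash-guest job.sh

# Inspect the configured QOS definitions:
sacctmgr show qos format=Name,Priority,Preempt,PreemptMode,Flags,GrpNodes,MaxWall
```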
- Interactive nodes: the owner group interactive nodes are ash1.chpc.utah.edu through ash4.chpc.utah.edu, which can also be accessed via ash.chpc.utah.edu. The interactive nodes for guest access are ash5.chpc.utah.edu and ash6.chpc.utah.edu, which can also be accessed via ash-guest.chpc.utah.edu.
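Access to the interactive nodes is via ssh; in the sketch below, `username` is a placeholder for your CHPC login name:

```shell
# Owner group users, via the shared alias for ash1-ash4:
ssh username@ash.chpc.utah.edu

# Guest users, via the shared alias for ash5-ash6:
ssh username@ash-guest.chpc.utah.edu
```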