CHPC's new general environment cluster, notchpeak, is now ready for general use.
As with the other clusters, there are two interactive (login) nodes, notchpeak1.chpc.utah.edu and notchpeak2.chpc.utah.edu; using notchpeak.chpc.utah.edu will round-robin between the two. The same usage policy as on our other cluster interactive nodes applies: do not run any resource-intensive processes (cores, memory, I/O) directly on these login nodes.
To use the general nodes, set the partition to notchpeak and the account to your PI group account.
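As a sketch, a batch script header for the general nodes might look like the following; the account name `mygroup` and the resource and time values are placeholders, so substitute your own PI group account and job requirements.

```shell
#!/bin/bash
# Minimal example SLURM batch script for the notchpeak general nodes.
# "mygroup" is a placeholder for your actual PI group account name.
#SBATCH --partition=notchpeak
#SBATCH --account=mygroup
#SBATCH --nodes=1
#SBATCH --ntasks=32          # the general nodes have 32 cores
#SBATCH --time=01:00:00      # adjust wall time for your job

# replace with your actual workload
srun ./my_program
```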
Applications that run on kingspeak should run on notchpeak without rebuilding. However, the notchpeak nodes do support the AVX-512 instruction set, and to take advantage of it you will need to rebuild. The CHPC documentation on code installation will be updated with this information.
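For example, with GCC a rebuild targeting AVX-512 is a matter of the architecture flag; the commands below are a hedged sketch (the source file name is a placeholder, and the exact flags depend on your compiler and version).

```shell
# Rebuild targeting the Skylake-SP AVX-512 instruction set (GCC 6.3+):
gcc -O3 -march=skylake-avx512 -o myprog myprog.c

# Alternatively, when compiling on a notchpeak node itself, let the
# compiler target the build host's CPU:
gcc -O3 -march=native -o myprog myprog.c
```

Note that a binary built with AVX-512 enabled will not run on the older kingspeak or ember nodes, so keep separate builds if you submit to multiple clusters.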
- Nodes are based on Intel Xeon SP (Skylake) processors and have 32 cores per node
- 11 general compute nodes with 192 GB memory
- 4 general compute nodes with 96 GB memory
- 3 GPU nodes, each with three NVIDIA V100 GPUs
- Mellanox EDR InfiniBand interconnect
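To request one of the V100 GPUs, a SLURM generic-resource request along the following lines should work; the exact gres name (`gpu:v100`) and whether the GPU nodes sit in a separate partition are assumptions here, so check the CHPC documentation for the definitive syntax.

```shell
#!/bin/bash
# Hedged sketch of a GPU job request; the gres name "gpu:v100" is an
# assumption and may differ on the actual cluster.
#SBATCH --partition=notchpeak
#SBATCH --account=mygroup        # placeholder PI group account
#SBATCH --nodes=1
#SBATCH --gres=gpu:v100:1        # request one of the three V100s on a node
#SBATCH --time=01:00:00

srun ./my_gpu_program
```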
For the remainder of this quarter, through March 31, notchpeak will run in an unallocated mode (as lonepeak does). Starting April 1, jobs on the notchpeak general compute nodes will count against general allocations, along with jobs on the kingspeak general compute nodes, and the ember general compute nodes will change to run in an unallocated mode.
As always, if you have any questions or run into issues, please let us know by emailing firstname.lastname@example.org.