Notchpeak, in service since January 2018, is operated in a condominium fashion, with some general CHPC nodes on which groups can obtain an allocation alongside additional nodes owned by different research groups. All users have guest access to the owner nodes in a preemptible fashion.
The General CHPC nodes include:
- 3 GPU nodes each with 32 cores, 192GB memory, and three V100 GPUs. These are described in more detail on our GPU & Accelerators page.
- 15 dual socket nodes with 32 cores each
- 4 nodes with 96 GB memory
- 19 nodes with 192 GB memory
- 2 nodes with 768 GB memory
- 1 dual socket node with 36 cores, 768 GB memory
Other information on notchpeak's hardware and cluster configuration:
- Intel XeonSP (Skylake) processors
- Mellanox EDR Infiniband interconnect
- 2 general interactive nodes
- The Skylake nodes offer AVX-512 support. See our page on Single Executables for all CHPC Platforms for details on building applications to take advantage of this feature.
CHPC resources are available to qualified faculty, students (under faculty supervision), and researchers from any Utah institution of higher education. Users can request accounts for CHPC computer systems by filling out the account request form.
Users requiring priority on their jobs may apply for an allocation of wall clock hours per quarter. Users will need to submit a brief proposal, using the allocation form found here.
The notchpeak cluster can be accessed via ssh (secure shell) at the following address:
All CHPC machines mount the same user home directories. This means that the user files on notchpeak will be exactly the same as the ones on other CHPC clusters. The advantage is obvious: users do not need to copy files between machines.
Notchpeak compute nodes mount the following scratch file systems:
- /scratch/ash/lustre (restricted access)
- /scratch/ucgd/serial (restricted access)
- /scratch/ucgd/lustre (restricted access)
As a reminder, the non-restricted scratch file systems are automatically scrubbed of files that have not been accessed for 60 days.
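To see which of your files are at risk before a scrub runs, you can apply the same 60-day access-time criterion yourself with `find`. A minimal sketch; the directory created here is only a stand-in for your actual scratch directory:

```shell
# Stand-in directory; point SCRDIR at your real scratch space instead.
SCRDIR=$(mktemp -d)
touch "$SCRDIR/recent_file"   # a freshly accessed file
# List regular files not accessed in more than 60 days (the scrub criterion):
find "$SCRDIR" -type f -atime +60
```

For the freshly touched file above, the `find` command prints nothing, since nothing yet meets the 60-day threshold.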
Your environment is set up through the use of modules. Please see the User Environment section of the General Cluster Information page for details on setting up your environment for batch and other applications.
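For example, a typical module session on an interactive node might look like the following (a sketch of common Lmod commands; the module names match those used in the batch script below):

```shell
module avail              # list the modules available on the system
module load intel mpich2  # load the Intel compilers and MPICH2
module list               # confirm which modules are currently loaded
```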
The batch implementation on notchpeak is Slurm.
The creation of a batch script on the Notchpeak cluster
A shell script is a bundle of shell commands which are fed one after another to a shell interpreter (e.g., bash, tcsh, ...). As soon as the first command has successfully finished, the second command is executed. This process continues until either an error occurs or the complete sequence of individual shell commands has been executed. A batch script is a shell script which defines the tasks a particular job has to execute on a cluster.
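The behavior described above can be sketched with a minimal, non-batch shell script; the commands and file names here are purely illustrative (this sketch is written for bash, while the batch example below uses tcsh syntax):

```shell
#!/bin/bash
# Commands run one after another; with -e, the script stops at the first error.
set -e
workdir=$(mktemp -d)                # illustrative working directory
echo "step 1: created $workdir"
cp /dev/null "$workdir/output.txt"  # a stand-in for real work
echo "step 2: wrote $workdir/output.txt"
```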
A batch script example for running in Slurm on the notchpeak cluster is shown below. The lines at the top of the file that begin with #SBATCH are interpreted by the shell as comments, but give options to Slurm.
Example Slurm Script for notchpeak:
#!/bin/tcsh
#SBATCH --time=1:00:00 # walltime, abbreviated by -t
#SBATCH --nodes=2 # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.out-%N # name of the stdout, using the job number (%j) and the first node (%N)
#SBATCH --ntasks=64 # number of MPI tasks, abbreviated by -n
# additional information for allocated clusters
#SBATCH --account=baggins # account - abbreviated by -A
#SBATCH --partition=notchpeak # partition, abbreviated by -p
# set data and working directories
setenv WORKDIR $HOME/mydata
setenv SCRDIR /scratch/notchpeak/serial/UNID/myscratch
mkdir -p $SCRDIR
cp -r $WORKDIR/* $SCRDIR
# load appropriate modules, in this case Intel compilers, MPICH2
module load intel mpich2
# for MPICH2 over Ethernet, set the communication method to TCP
# see above for network interface selection options for other MPI distributions
setenv MPICH_NEMESIS_NETMOD tcp
# run the program
# see above for other MPI distributions
mpirun -np $SLURM_NTASKS my_mpi_program > my_program.out
For more details and example scripts, please see our Slurm documentation. Also, to help with specifying your job and instructions in your Slurm script, please review CHPC Policy 2.1.6 notchpeak Job Scheduling Policy.
Job Submission on Notchpeak
In order to submit a job on notchpeak, one must first log in to a notchpeak interactive node. Note that this is a change from how job submission has worked in the past on our other clusters, where you could submit from any interactive node to any cluster.
To submit a script named slurmjob.script, just type:
sbatch slurmjob.script
To check the status of your job, use the "squeue" command (for example, squeue -u $USER to list only your own jobs).
For information on compiling on the clusters at CHPC, please see our Programming Guide.