Lonepeak User Guide

Lonepeak has an unallocated pool of 194 nodes for general use, on which jobs are not preempted. There are also 21 owner nodes that can be used in a preemptable fashion with the owner-guest account, along with 21 generally available GPU nodes, each with 8 Nvidia 1080 Ti GPUs.
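
For orientation, the #SBATCH directives below sketch how a job might target these pools. Only the owner-guest account name appears on this page; the partition names shown (lonepeak-guest, lonepeak-gpu) and the --gres request are illustrative assumptions, so confirm the actual names in 2.1.5 Lonepeak Job Scheduling Policy or with the sinfo command.

# preemptable run on the owner nodes (guest access); partition name is an assumption
#SBATCH --account=owner-guest
#SBATCH --partition=lonepeak-guest

# requesting a single GPU on a GPU node; partition name is an assumption
#SBATCH --partition=lonepeak-gpu
#SBATCH --gres=gpu:1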

Lonepeak Cluster Hardware (General) Overview

  • 142 nodes with 12 cores (Intel Xeon X5650 - Westmere), 2.67 GHz, 96 GB RAM, various sizes of /scratch/local
  • 48 nodes with 24 cores (Intel Xeon E5-2697 v2 - IvyBridge), 2.70 GHz, 256 GB RAM, various sizes of /scratch/local
  • 4 nodes with 32 cores (Intel Xeon X7560 - Nehalem), 2.27 GHz, 1 TB RAM, 1.1 TB /scratch/local space -- added 10 Feb 2020
  • 21 GPU nodes with 8 Nvidia 1080 Ti GPUs, 16 CPU cores (Intel Xeon E5-2620 v4 - Broadwell), 2.1 GHz, 64 GB RAM, 1.8 TB /scratch/local
  • Ethernet interconnect - no InfiniBand
  • 2 general interactive nodes

This cluster also has 21 owner nodes (416 cores total). For details on these nodes and access, please see 2.1.5 Lonepeak Job Scheduling Policy. The current set of owner nodes is:

  • 20 nodes with 20 cores (Intel Xeon E5-2640 v4 - Broadwell) and 64 GB RAM
  • 1 node with 16 cores (AMD Opteron 6276) and 128 GB RAM

Interactive Nodes

While the compute nodes on Lonepeak have large amounts of memory available, the interactive nodes, accessed via ssh lonepeak.chpc.utah.edu, do not. Please do not run jobs on these interactive nodes. The interactive node policy is available at 2.1.1 Cluster Interactive Node Policy.

Important Differences from Other CHPC Clusters – NEW! 

  • No allocation is required to use this cluster; for scheduling details, see 2.1.5 Lonepeak Job Scheduling Policy
  • This cluster does not have an InfiniBand network
  • The general nodes are of the relatively old Nehalem CPU generation (compiler optimization flags: Intel -xSSE4.2, PGI -tp=nehalem), while the owner nodes are of the more recent Broadwell CPU generation (compiler optimization flags: Intel -xCORE-AVX2, PGI -tp=haswell). We recommend building a single binary with optimizations for each of these platforms, as described here and sketched below this list.
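
The commands below are a minimal sketch of such a multi-target build with the Intel compilers, where -xSSE4.2 provides the baseline code path for the general nodes and -axCORE-AVX2 adds an alternative code path for the Broadwell owner nodes. The source file name is a placeholder, and module names or versions may differ in your environment.

module load intel
# one binary with an SSE4.2 baseline (general nodes) and an added AVX2 path (owner nodes)
icc -O3 -xSSE4.2 -axCORE-AVX2 -o my_program my_program.c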

FAQ section - NEW!

Lonepeak Cluster Usage

CHPC resources are available to qualified faculty, students (under faculty supervision), and researchers from any Utah institution of higher education. Users can request accounts for CHPC computer systems by filling out the account request form. No allocation is required to use this cluster.

Lonepeak Cluster Access and Environment

The lonepeak cluster can be accessed via ssh (secure shell) at the following address:

  • lonepeak.chpc.utah.edu
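
For example, a login from a terminal looks like the following, where u0123456 is a placeholder for your own CHPC username:

ssh u0123456@lonepeak.chpc.utah.edu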

All CHPC machines mount the same user home directories. This means that the user files on Lonepeak will be exactly the same as the ones on other CHPC clusters. The advantage is obvious: users do not need to copy files between machines. 

Lonepeak compute nodes mount the following scratch file systems:

  • /scratch/general/nfs1
  • /scratch/general/vast

As a reminder, the non-restricted scratch file systems are automatically scrubbed of files that have not been accessed for 60 days.
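
As a rough self-check, a find command along these lines will list your scratch files that have not been accessed recently; the path and the 50-day threshold are only an example:

find /scratch/general/nfs1/$USER -type f -atime +50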

For more details on setting up your environment for batch, please see General Cluster Information: User Environment.

Using the Batch System on Lonepeak

The batch implementation on Lonepeak is Slurm.

Creating a batch script on the Lonepeak cluster

A shell script is a bundle of shell commands which are fed one after another to a shell (bash, tcsh, ...). As soon as one command has successfully finished, the next command is executed. This process continues until either an error occurs or the complete list of commands has been executed. A batch script is a shell script which defines the tasks a particular job has to execute on a cluster.

A batch script example for running under Slurm on the Lonepeak cluster is shown below. The lines at the top of the file begin with #SBATCH; they are interpreted by the shell as comments, but give options to Slurm.

Example Slurm Script for Lonepeak:

#!/bin/csh

#SBATCH --time=1:00:00 # walltime, abbreviated by -t
#SBATCH --nodes=2 # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.out-%N # name of the stdout, using the job number (%j) and the first node (%N)
#SBATCH --ntasks=24 # number of MPI tasks, abbreviated by -n

# additional information for allocated clusters
#SBATCH --account=baggins # account - abbreviated by -A
#SBATCH --partition=lonepeak # partition, abbreviated by -p

# set data and working directories

setenv WORKDIR $HOME/mydata

setenv SCRDIR /scratch/general/nfs1/$USER/$SLURM_JOB_ID
mkdir -p $SCRDIR
cp -r $WORKDIR/* $SCRDIR
cd $SCRDIR

# load appropriate modules, in this case Intel compilers, MPICH2
module load intel mpich2
# for MPICH2 over Ethernet, set communication method to TCP - for general lonepeak nodes
# see above for network interface selection options for other MPI distributions
setenv MPICH_NEMESIS_NETMOD tcp
# run the program
# see above for other MPI distributions
mpirun -np $SLURM_NTASKS my_mpi_program > my_program.out
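
# optionally, copy results back to the working directory once the run finishes
# (a common pattern for scratch-based workflows; adjust the paths to your own needs)
cp -r $SCRDIR/* $WORKDIR
cd $WORKDIR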

For more details and example scripts, please see our Slurm documentation. Also, to help with specifying your job requirements in your Slurm script, please review CHPC Policy 2.1.5 Lonepeak Job Scheduling Policy.

Job Submission on Lonepeak

In order to submit a job on Lonepeak, one has to first log in to a Lonepeak interactive node. Note that this is a change from the way job submission has worked in the past on our other clusters, where you could submit from any interactive node to any cluster.

To submit a script named slurmjob.lonepeak, just type:

sbatch slurmjob.lonepeak

Checking the status of your job in slurm

To check the status of your job, use the "squeue" command

squeue
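
To narrow the output to your own jobs or to a single job, squeue accepts the -u and -j options; the job ID below is just a placeholder:

squeue -u $USER
squeue -j 1234567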

For information on compiling on the clusters at CHPC, please see our Programming Guide.

Last Updated: 7/5/23