
Ash Cluster Hardware Overview

  • 164 20-core nodes and 48 24-core nodes (4,432 total cores)
  • 64 Gbytes of memory per node on the 20-core nodes
  • 128 Gbytes of memory per node on the 24-core nodes
  • Mellanox FDR Infiniband interconnect
  • Gigabit Ethernet interconnect for management

Ash Usage

The use of the Ash system is limited to members of the CHPC group "smithp". The Ash cluster was purchased for the use of researchers working with Phil Smith. However, other CHPC users may access this cluster in the same way that they access other owner nodes: with a guest account. On the other HPC clusters, this account is "owner-guest". On the Ash cluster, use "smithp-guest".

CHPC resources are available to qualified faculty, students (under faculty supervision), and researchers from any Utah institution of higher education. Users can request accounts for CHPC computer systems by filling out the account request form.

Since Ash is not under any kind of allocation system, there is no need to apply for an allocation.

Ash Access and Environment

The Ash cluster can be accessed via ssh (secure shell) at the following address (for users in Professor Phil Smith's group):

  • ash.chpc.utah.edu

If you are not in Prof. Smith's group, and wish to run as smithp-guest, you should use the interactive nodes at the following address:

  • ash-guest.chpc.utah.edu
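
For example, logging in from a terminal with ssh looks like the following (the username shown is only a placeholder; use your own CHPC username, and members of Prof. Smith's group would use ash.chpc.utah.edu instead):

ssh u0123456@ash-guest.chpc.utah.edu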

All CHPC machines mount the same user home directories. This means that the user files on Ash will be exactly the same as the ones on other CHPC clusters (except that there are a few additional filesystems limited to the smithp group). The advantage is obvious: users do not need to copy files between machines.

Ash compute nodes mount the following scratch file systems:

  • /scratch/general/nfs1
  • /scratch/general/vast

As a reminder, the non-restricted scratch file systems are automatically scrubbed of files that have not been accessed for 60 days.
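
To see how much space is currently free on these scratch file systems before staging data, you can check from an interactive node with standard tools, for example:

df -h /scratch/general/nfs1 /scratch/general/vast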

To update your environment, use the modules software; see the User Environment section of the Cluster Guides. For more information, refer to the modules documentation.
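
For example, a typical sequence on an interactive node might look like the following (the intel and mpich2 modules match those loaded in the batch script example below; the module names and versions available to you may differ):

# list the modules available on the cluster
module avail
# load the Intel compiler and MPICH2 MPI library
module load intel mpich2
# show the modules currently loaded
module list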

Using the Batch System on Ash

The batch implementation on Ash is Slurm.

Creating a batch script on the Ash cluster

A shell script is a bundle of shell commands which are fed one after another to a shell (bash, tcsh, ...). As soon as the first command has finished successfully, the second command is executed, and this continues until either an error occurs or the complete sequence of commands has been executed. A batch script is a shell script that defines the tasks a particular job has to execute on a cluster.

Below is an example batch script for running under Slurm on the Ash cluster. The lines at the top of the file begin with #SBATCH; the shell treats them as comments, but they pass options to Slurm.

Example Slurm Script for Ash:

#!/bin/csh

#SBATCH --time=1:00:00 # walltime, abbreviated by -t
#SBATCH --nodes=2 # number of cluster nodes, abbreviated by -N
#SBATCH -o slurm-%j.out-%N # name of the stdout, using the job number (%j) and the first node (%N)
#SBATCH --ntasks=16 # number of MPI tasks, abbreviated by -n

# additional information for allocated clusters
#SBATCH --account=smithp-guest # account, abbreviated by -A
#SBATCH --partition=ash-guest # partition, abbreviated by -p

# set data and working directories

setenv WORKDIR $HOME/mydata

setenv SCRDIR /scratch/general/vast/$USER/$SLURM_JOB_ID
mkdir -p $SCRDIR
cp -r $WORKDIR/* $SCRDIR
cd $SCRDIR

# load appropriate modules
module load intel mpich2

# run the program
mpirun -np $SLURM_NTASKS my_mpi_program > my_program.out
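
The script above stages data into scratch but does not copy results back when the run finishes. A minimal addition at the end of the script, assuming the same WORKDIR and SCRDIR variables, might look like this:

# copy results back to the data directory and remove the scratch copy
cp -r $SCRDIR/* $WORKDIR
rm -rf $SCRDIR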

 

For more details and example scripts, please see our Slurm documentation. Also, to help with specifying your job and instructions in your Slurm script, please review CHPC Policy 2.2.1 Ash Job Scheduling Policy.

Job Submission on Ash

In order to submit a job on Ash, you must first log in to an Ash interactive node (see above: Ash Access). Note that this is a change from the way job submission has worked in the past on our other clusters, where you could submit from any interactive node to any cluster.

To submit a script named slurmjob.ash, just type:

sbatch slurmjob.ash
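
If the job is accepted, sbatch responds with the job ID assigned to it, for example (the job number below is only an illustration):

Submitted batch job 1234567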

Checking the status of your job in Slurm

To check the status of your job, use the "squeue" command:

squeue
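
To limit the output to your own jobs, or to a single job, pass a user name or a job ID (the job ID below is only an example):

squeue -u $USER
squeue -j 1234567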

For information on compiling on the clusters at CHPC, please see our Programming Guide.

Last Updated: 7/5/23