
GPUs and Accelerators at CHPC

The CHPC has a limited number of cluster compute nodes with GPUs. These GPU nodes are located on the Ember, Kingspeak, Notchpeak, and Redwood (Protected Environment, PE) clusters. This document describes the hardware as well as access to and usage of these resources.

  • Hardware overview
  • Access and running jobs
  • GPU programming environment
  • Installed GPU codes

GPU Hardware Overview

CHPC has a limited number of GPU nodes on notchpeak, kingspeak, ember and redwood (PE).

Table 1: Characteristics of the CHPC nodes with GPU Devices
Nodes                         GPU type      GPUs per node  GPU memory (GB)  Compute capability  Gres flag / access code  General node?
notch{001-003}                Tesla V100    3              16               7.0                 gpu:v100:{1-3}           Y
notch004                      RTX 2080 Ti   2              11               7.5                 gpu:2080ti:{1-2}         Y
notch004                      Tesla P40     1              24               6.1                 gpu:p40:1                Y
notch055                      Titan V       4              12               7.0                 gpu:titanv:{1-4}         N
notch060                      GTX 1080 Ti   8              11               6.1                 gpu:1080ti:{1-8}         N
notch083                      RTX 2080 Ti   4              11               7.5                 gpu:2080ti:{1-4}         N
notch084                      RTX 2080 Ti   4              11               7.5                 gpu:2080ti:{1-4}         N
notch085                      RTX 2080 Ti   4              11               7.5                 gpu:2080ti:{1-4}         N
notch{086-088}                RTX 2080 Ti   4              11               7.5                 gpu:2080ti:{1-4}         Y
notch089                      RTX 2080 Ti   4              11               7.5                 gpu:2080ti:{1-4}         N
kp{297-298}                   GTX Titan X   8              12               5.2                 gpu:titanx:{1-8}         Y
kp{299-300}                   Tesla K80     8              12               3.7                 gpu:k80:{1-8}            Y
kp{359-362}                   Tesla P100    2              16               6.0                 gpu:p100:{1-2}           N
em{001-004,006-008,011-012}   Tesla M2090   2              6                2.0                 gpu:m2090:{1-2}          Y
rw{085-086}                   GTX 1080 Ti   4              11               6.1                 gpu:1080ti:{1-4}         Y

Notchpeak

Notchpeak contains 13 compute nodes with GPU devices. The node notch060 has 16 physical CPU cores (Intel Xeon Silver 4110 @ 2.10 GHz) and 96 GB of memory. The nodes notch{001-004,055,083,084} have 32 physical CPU cores (Intel Xeon Gold 6130 @ 2.10 GHz) and 192 GB of memory. The node notch085 has 32 physical CPU cores (Intel Xeon Silver 4216 @ 2.10 GHz) and 384 GB of memory. The nodes notch{086-089} have 40 physical CPU cores (Intel Xeon Gold 6230 @ 2.10 GHz) and 192 GB of memory.

 

  • The nodes notch001, notch002, and notch003 each contain 3 Tesla V100 GPU devices (Volta generation). Each GPU has 16 GB of global memory. The peak double-precision performance of a Tesla V100 is 7 TFlops; its peak single-precision performance is 14 TFlops. Its memory bandwidth is 900 GB/s, and it contains 5120 CUDA cores.
  • notch004 contains 1 Tesla P40 device of the Pascal generation. It has 24 GB of global memory (GDDR5X) with a memory bandwidth of 346 GB/s. Its single-precision performance is 12 TFlops; its double-precision performance is 0.35 TFlops. notch004 also contains 2 RTX 2080 Ti devices (Turing generation) like the ones in notch083 and notch084 (see below for details).
  • notch055 contains 4 Titan V devices (Volta generation). Each device has 5120 CUDA cores and 12 GB of global memory (HBM2) with a memory bandwidth of 652.8 GB/s. Each device has the following performance specifics: 6.9 TFlops (double precision), 13.8 TFlops (single precision), 27.6 TFlops (half precision), and 110 TFlops (tensor performance for deep learning).
  • notch060 contains 8 GTX 1080 Ti devices (Pascal generation). Each device has 11 GB of global memory (GDDR5X) with a memory bandwidth of 484 GB/s and 3584 CUDA cores. Each GPU card has a single-precision performance of 10.6 TFlops; the double-precision performance is only 0.33 TFlops.
  • notch{083-089} each contain 4 RTX 2080 Ti devices (Turing generation). Each GPU device has 11 GB of global memory (GDDR6) with a memory bandwidth of 616 GB/s and 4352 CUDA cores. The single-precision performance of each GPU device is 11.75 TFlops; its double-precision performance is 0.37 TFlops.

Note that the nodes marked N in the last column of Table 1 (notch055, notch060, notch083, notch084, notch085, and notch089) are owned by research groups. CHPC users who do not belong to the research groups that own these devices can use them only through the SLURM partition notchpeak-gpu-guest.

Kingspeak

The GPU nodes kp297, kp298, kp299, and kp300 each have two 6-core Intel Haswell generation CPUs (2.4 GHz) and 64 GB of memory. The remaining GPU nodes, kp359, kp360, kp361, and kp362, each have two 14-core Intel Broadwell processors (E5-2680 v4 running at 2.4 GHz) and 256 GB of RAM. The nodes kp{297-300} are general nodes (i.e., owned by the CHPC). The GPU nodes kp{359-362} are owned by the School of Computing (SOC); therefore, CHPC users outside the SOC need to use the SLURM kingspeak-gpu-guest partition to access the GPUs of the kp{359-362} nodes.

  • The nodes kp297 and kp298 each have 8 GeForce GTX Titan X devices (Maxwell generation). Each of these GPU devices has 12 GB of global memory. Each device has good single-precision performance (~7 TFlops) but rather poor double-precision performance (about 200 GFlops at most). Therefore, the Titan X nodes should be used for single-precision or mixed single/double-precision GPU codes.
  • The nodes kp299 and kp300 each have 4 Tesla K80 cards (Kepler generation). Each K80 card consists of two GPUs, each with 12 GB of global memory, so each of these nodes presents 8 GPU devices in total. The peak double-precision performance of a single K80 card is 1864 GFlops.
  • The four nodes kp{359-362} each have 2 Tesla P100 devices (Pascal generation). Each device has 16 GB of global memory and contains 56 multiprocessors of 64 cores each (3584 CUDA cores in total). ECC support is currently disabled. The system interface is a PCIe Gen3 bus. The double-precision performance per card is 4.7 TFlops, the single-precision performance is 9.3 TFlops, and the half-precision performance is 18.7 TFlops.

Ember

The Ember cluster has 9 nodes which have two Tesla M2090 cards (Fermi generation) each. Each card has 6 GB of global memory. Although relatively old, each card has a peak double precision floating point performance of 666 GFlops, still making it a good performer. The GPU nodes have two 6-core Intel Westmere generation CPUs and 24 GB of host RAM.

Redwood (Protected Environment)

Redwood contains 2 compute nodes with GPU devices. Both compute nodes have 32 physical CPU cores (Intel Xeon Gold 6130 @ 2.10 GHz) and 192 GB of memory.

  • The nodes rw{085-086} each have 4 GTX 1080 Ti devices (Pascal generation). Each device contains 11 GB of global memory (GDDR5X) with a memory bandwidth of 484 GB/s. It also contains 3584 CUDA cores. 

Access and running jobs, node sharing

Access

The use of the GPU nodes does not affect your allocation (i.e., their usage does not count against any allocation your group may have). However, access to the GPU nodes is restricted to users who have a special GPU account, which needs to be requested. Please e-mail helpdesk@chpc.utah.edu to do so.

Running jobs and node sharing

There are 2 categories of GPU nodes:

  • General nodes (labeled Y in the last column of Table 1), i.e., nodes owned by the CHPC. The corresponding SLURM partition and account settings are:
    • --account=$(clustername)-gpu
    • --partition=$(clustername)-gpu
    where $(clustername) stands for either notchpeak, kingspeak, ember, or redwood.
  • Owner nodes (non-general nodes), i.e., nodes owned by a research group (labeled N in the last column of Table 1).
    • Members of the groups that own GPU nodes are given a group-specific SLURM account and partition to access these nodes. For example, members of the group that owns the nodes kp{359-362} need to use the following settings:
      • --account=soc-gpu-kp
      • --partition=soc-gpu-kp
    • Users outside the owning group can also use these devices. The corresponding account and partition names are:
      • --account=owner-gpu-guest
      • --partition=$(clustername)-gpu-guest
      where $(clustername) stands for either notchpeak or kingspeak. Note that jobs by users outside the owning group may be subject to preemption. Minimal example headers for both categories are shown below.
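The headers below illustrate the two cases, using notchpeak as an example. Only the account and partition lines are shown; gres, time, task count, etc. still need to be added as described further down.

# General (CHPC-owned) GPU nodes on notchpeak:
#SBATCH --account=notchpeak-gpu
#SBATCH --partition=notchpeak-gpu

# Guest access to owner GPU nodes on notchpeak (the job may be preempted):
#SBATCH --account=owner-gpu-guest
#SBATCH --partition=notchpeak-gpu-guest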

To access the GPU devices on a node, one must specify the generic consumable resources flag (a.k.a. the gres flag). The gres flag has the following syntax:

--gres=$(resource_type)[:$(resource_name):$(resource_count)]
where:

  • $(resource_type) is always the string gpu for the GPU devices.
  • $(resource_name) is a string that describes the type of the requested GPU(s), e.g., k80, titanv, 2080ti, ....
  • $(resource_count) is the number of GPU devices requested of the type $(resource_name). Its value is an integer between 1 and the maximum number of devices of that type on one node.
  • the [ ] denote optional parameters; that is, to request any single GPU (the default count is 1), regardless of type, --gres=gpu will work. To request more than one GPU of any type, add the $(resource_count), e.g., --gres=gpu:2.

The gres flag attached to each type of node can be found in the second-to-last column of Table 1.
For example, the flag --gres=gpu:titanx:5 requests 5 GTX Titan X devices; such a request can only be satisfied by the nodes kp297 and kp298.
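A few more illustrative gres requests, with device names taken from Table 1 (the counts are arbitrary examples):

#SBATCH --gres=gpu:1080ti:2     # two GTX 1080 Ti devices (available on notch060 and rw{085-086})
#SBATCH --gres=gpu:2080ti:4     # four RTX 2080 Ti devices on a single notchpeak node
#SBATCH --gres=gpu:2            # any two GPU devices, regardless of type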

Note that if you do not specify the gres flag, you will still land on a GPU node (presuming you use the correct combination of the --partition and --account flags), but your job will not have access to the node's GPUs.
You can easily check this with the following command:
echo $CUDA_VISIBLE_DEVICES
An empty output string implies no access to the node's GPU devices.

Some programs are serial or able to run only on a single GPU; other jobs perform better on one or a few GPUs and therefore cannot efficiently make use of all of the GPUs on a single node. Therefore, in order to better utilize our GPU nodes, node sharing has been enabled for the GPU partitions. This allows multiple jobs to run on the same node, with each job being assigned specific resources (number of cores, amount of memory, number of accelerators). The node resources are managed by the scheduler up to the maximum available on each node. It should be noted that while efforts are made to isolate jobs running on the same node, there are still many shared components in the system. A job's performance can therefore be affected by other jobs running on the node at the same time; if you are doing benchmarking, you will want to request the entire node even if your job will only make use of part of it.

Node sharing is engaged by requesting less than the full number of GPUs, cores, and/or memory on a node; sharing can be done on the basis of any one of these resources, or all three. By default, each job gets 2 GB of memory per core requested (the lowest common denominator among our cluster nodes); to request a different amount of memory, use the --mem flag. To request exclusive use of the node, use --mem=0.

When node sharing is in effect (the default unless the full number of GPUs, cores, or memory is requested), the SLURM scheduler automatically sets task-to-core affinity, mapping one task per physical core. To find which cores are bound to the job's tasks, run:

cat /cgroup/cpuset/slurm/uid_$SLURM_JOB_UID/job_$SLURM_JOB_ID/cpuset.cpus

Below is a list of useful job options:

Option                      Explanation
#SBATCH --gres=gpu:k80:1    request one K80 GPU
#SBATCH --mem=4G            request 4 GB of RAM
#SBATCH --mem=0             request all memory of the node; this also ensures exclusive use of the node by the job
#SBATCH --ntasks=1          request 1 task, mapping it to 1 CPU core

 

An example script that requests two Ember nodes, each with two M2090 GPUs, using all cores and all memory and running one GPU per MPI task, would look like this:

#SBATCH --nodes=2
#SBATCH --ntasks=4
#SBATCH --mem=0

#SBATCH --partition=ember-gpu
#SBATCH --account=ember-gpu
#SBATCH --gres=gpu:m2090:2
#SBATCH --time=1:00:00
... prepare scratch directory, etc
mpirun -np $SLURM_NTASKS myprogram.exe

 To request all 8 K80 GPUs on a kingspeak node, again using one GPU per MPI task, we would do:

#SBATCH --nodes=1
#SBATCH --ntasks=8
#SBATCH --mem=0
#SBATCH --partition=kingspeak-gpu
#SBATCH --account=kingspeak-gpu
#SBATCH --gres=gpu:k80:8
#SBATCH --time=1:00:00
... prepare scratch directory, etc
mpirun -np $SLURM_NTASKS myprogram.exe

As an example, the script below will obtain four GPUs, four CPU cores, and 8 GB of memory; the remaining GPUs, CPUs, and memory remain accessible to other jobs.

#SBATCH --time=00:30:00             
#SBATCH --nodes=1    
#SBATCH --ntasks=4
#SBATCH --gres=gpu:titanx:4
#SBATCH --account=kingspeak-gpu
#SBATCH --partition=kingspeak-gpu

The script below will ask for 14 CPU cores, 100 GB of memory and 1 GPU card on one of the P100 nodes.

#SBATCH --time=00:30:00             
#SBATCH --nodes=1    
#SBATCH --ntasks=14
#SBATCH --gres=gpu:p100:1
#SBATCH --mem=100GB
#SBATCH --account=owner-gpu-guest
#SBATCH --partition=kingspeak-gpu-guest

To run a parallel interactive job with MPI, do not use the usual srun command, as it does not work correctly with the gres flag. Instead, use the salloc command, e.g.

salloc -n 2 -N 1 -t 1:00:00 -p kingspeak-gpu -A kingspeak-gpu --gres=gpu:titanx:2

This will allocate the resources to the job, namely two tasks and two GPUs, but keep the prompt on the interactive node. You can then use the srun or mpirun commands to launch the calculation on the allocated compute node resources. To specify more memory than the default 2 GB per task, use the --mem option.
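For example, once the salloc command above returns an allocation, the workflow might look like this (the program name is a placeholder):

# the prompt stays on the interactive node; launch onto the allocated node with srun or mpirun:
mpirun -np $SLURM_NTASKS ./my_gpu_program
# when finished, release the allocation:
exit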

For serial, non-MPI jobs, utilizing one or more GPUs, srun is functional, e.g.

srun -n 1 -N 1 -A owner-gpu-guest -p kingspeak-gpu-guest --gres=gpu:p100:1 --pty /bin/bash -l

GPU programming environment

Nvidia CUDA, PGI CUDA Fortran, and the OpenACC compilers are installed on all GPU nodes. The default CUDA installation is found at /usr/local/cuda, or can be set up by simply loading the CUDA module, module load cuda. Different CUDA versions can also be used; run module spider cuda for a list of available versions. The PGI compilers come with their own, fairly recent CUDA and can be set up by loading the PGI module, module load pgi.
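For example, a typical way to set up the environment before compiling (the module names are the ones mentioned above):

module spider cuda      # list the available CUDA versions
module load cuda        # load the default CUDA toolkit
nvcc --version          # confirm which nvcc is now in the path
# or, for the PGI compilers with their bundled CUDA:
module load pgi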

We recommend getting an interactive session on a GPU compute node in order to compile CUDA code, but, in a pinch, any interactive node should work, as CUDA is installed on the interactive nodes as well. The PGI compilers come with their own CUDA, so compiling anywhere you can load the PGI module should work.

To compile CUDA code so that it runs on the GPU types listed in Table 1, use the following compiler flags: -gencode arch=compute_20,code=sm_20 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_70,code=sm_70. Note that the compute capability 6.1 cards (GTX 1080 Ti, P40) and the compute capability 7.5 cards (RTX 2080 Ti) additionally need -gencode arch=compute_61,code=sm_61 and -gencode arch=compute_75,code=sm_75, respectively. For more information on the CUDA compilation and linking flags, please have a look at http://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf.
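As a sketch, a compilation line covering several of these architectures could look as follows (the source and binary names are placeholders; compute_20 for the M2090s is only accepted by CUDA 8 and older, so it is omitted here):

nvcc -O2 \
     -gencode arch=compute_37,code=sm_37 \
     -gencode arch=compute_52,code=sm_52 \
     -gencode arch=compute_60,code=sm_60 \
     -gencode arch=compute_61,code=sm_61 \
     -gencode arch=compute_70,code=sm_70 \
     -gencode arch=compute_75,code=sm_75 \
     -o myprogram myprogram.cu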

The PGI compilers specify the GPU architecture with the -ta=tesla flag. If no further option is specified, this flag generates code for all compute capabilities available in that compiler release (at the time of writing cc20, cc30, cc35, cc50, and cc60). To target a specific GPU, -ta=tesla:cc20 can be used for the M2090, -ta=tesla:cc35 for the K80, -ta=tesla:cc50 for the Titan X, and -ta=tesla:cc60 for the P100. To invoke OpenACC, use the -acc flag. More information on OpenACC can be obtained at http://www.openacc.org.
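A corresponding OpenACC build with the PGI compiler might look like this (the source and binary names are placeholders; -Minfo=accel just makes the compiler report what it offloaded):

# target the P100s on kp{359-362}; swap cc60 for the appropriate capability from Table 1
pgcc -acc -ta=tesla:cc60 -Minfo=accel -o myprogram_acc myprogram.c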

Good tutorials on GPU programming are available at the CUDA Education and Training site from Nvidia.

When running GPU code, it is worth checking the resources the program is using, to ensure that the GPU is well utilized. For that, one can run the nvidia-smi command and watch the memory and GPU utilization. nvidia-smi is also useful for querying and setting various features of the GPU; see nvidia-smi --help for all the options the command takes. For example, nvidia-smi -L lists the GPU devices. On the Titan X nodes:

GPU 0: GeForce GTX TITAN X (UUID: GPU-cd731d6a-ee18-f902-17ff-1477cc59fc15)
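To watch utilization while a job is running, nvidia-smi can also be polled periodically; for example (the query fields below are standard nvidia-smi ones):

nvidia-smi -l 5                                                              # refresh the full status display every 5 seconds
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 5   # compact utilization/memory report
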
Debugging

Nvidia's CUDA distribution includes a terminal debugger named cuda-gdb. Its operation is similar to the GNU gdb debugger. For details, see the cuda-gdb documentation.

For out-of-bounds and misaligned memory access errors, there is the cuda-memcheck tool. For details, see the cuda-memcheck documentation.
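Typical invocations of both tools on a hypothetical binary would be:

cuda-gdb ./myprogram          # interactive, gdb-style debugging of host and device code
cuda-memcheck ./myprogram     # reports out-of-bounds and misaligned device memory accesses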

The Totalview debugger that we used to license and the DDT debugger that we currently license also support CUDA and OpenACC debugging. Due to their user-friendly graphical interfaces, we recommend them for GPU debugging. For information on how to use DDT or Totalview, see our debugging page.

Profiling

Profiling can be very useful for finding GPU code performance problems, for example inefficient GPU utilization, use of shared memory, etc. Nvidia CUDA provides both a command-line profiler (nvprof) and a visual profiler (nvvp). More information is in the CUDA profilers documentation.
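For example, a quick command-line profile, or a profile file that can later be opened in nvvp, can be produced roughly as follows (the binary name is a placeholder):

nvprof ./myprogram                        # print a summary of kernel and memory-copy times
nvprof -o myprogram.prof ./myprogram      # write a profile that can be imported into nvvp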

 

Installed GPU codes

We have the following GPU codes installed:

Code name   Module name      Prerequisite modules               Sample batch script(s) location                                     Other notes
HOOMD       hoomd            gcc/4.8.5, mpich2/3.2.g            /uufs/chpc.utah.edu/sys/installdir/hoomd/2.0.0g-[sp,dp]/examples/
VASP        vasp             intel, impi, cuda/7.5              /uufs/chpc.utah.edu/sys/installdir/vasp/examples                    Per-group license; let us know if you need access
AMBER       amber-cuda       gcc/4.4.7, mvapich2/2.1.g          adapt CPU script
LAMMPS      lammps/10Aug15   intel/2016.0.109, impi/5.1.1.109   adapt CPU script

If there is any other GPU code that you would like us to install, please let us know.

Some commercial programs that we have installed, such as Matlab, also have GPU support. Either try them or contact us.

Last Updated: 5/30/19