
CHPC Getting Started Guide

This guide is for users new to CHPC and provides an overview of our high performance computing systems; it is not meant to replace our other documentation. It also does not address the other areas supported by CHPC, such as virtual machines, the protected environment, networking support, and storage. Please send any comments or feedback on this guide to helpdesk@chpc.utah.edu.

User Responsibilities

When you applied for your account, you signed and agreed to the following:

In obtaining this account I agree to use the Center for High Performance Computing resources solely for the purposes connected with my University of Utah or Utah State University affiliation(s) and agree not to allow my access to be used in a manner which could permit unauthorized use of the computer. During the use of this account, certain proprietary software may be made available. Availability of said software is for my use only and may not be copied to any other machine(s) or made available to any other person(s).

CHPC is an important but limited resource for the University of Utah and Utah State University communities. It is CHPC's responsibility to protect these facilities and safeguard their proper use. Because we cannot do this job alone, we depend on your assistance. Responsible use of the system and cooperation on your part helps us maximize availability for you and other researchers.

If CHPC determines that you have not used the CHPC resources in an appropriate fashion, you could lose your account, your current allocation, and your ability to qualify for future allocations.

Accounts

The first step is to submit a request to CHPC for an account on our systems. This process is described in the CHPC Policy Manual section on account creation.

Accessing the HPC Systems

When you receive your HPC account, you are given access to all of the High Performance Computing platforms supported by CHPC. They are available through the ssh (secure shell) utility. For security reasons, we only allow SSH protocol 2 to access our systems and do not allow ssh keys.
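For example, to open a session on notchpeak from a terminal (the username u0123456 below is a placeholder; use your own uNID):

    ssh u0123456@notchpeak.chpc.utah.edu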

When provisioned, your account is populated with the necessary login files for both a tcsh and a bash environment. Depending on the shell you chose when setting up the account, the appropriate files are used to set your environment so that you can use the clusters. The provided login files include .tcshrc, .bashrc, .custom.sh, and .custom.csh. These files use the LMod module system to set the environment. We suggest that you do not modify the .tcshrc and .bashrc files, but instead use .custom.sh and .custom.csh, along with a .aliases file you create, to customize your interactive environment.
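As a sketch of what such customization might look like for bash users (the module name below is only an example; run module avail on a cluster to see what is actually installed):

    # ~/.custom.sh -- sourced by the CHPC-provided .bashrc at login
    module load gcc             # example: load a frequently used module
    export EDITOR=vim           # example: a personal environment setting

    # ~/.aliases -- personal shell aliases
    alias sq='squeue -u $USER'  # example: shortcut for checking your jobs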

The general HPC clusters currently supported are:

  • Notchpeak Cluster:
    • ssh notchpeak.chpc.utah.edu
  • Kingspeak Cluster:
    • ssh kingspeak.chpc.utah.edu
  • Lonepeak Cluster:
    • ssh lonepeak.chpc.utah.edu

And in the protected environment (for work with sensitive, restricted, or other forms of protected data), the HPC cluster is:

  • Redwood Cluster:
    • ssh redwood.chpc.utah.edu

When you ssh to any of the above, you land on one of the cluster's interactive (login) nodes. Note that we highly recommend using FastX to connect to the clusters.

The login nodes are where you manage your workflow, such as editing files, submitting jobs, and analyzing your output. All substantial computational work is completed on the compute nodes, which are accessed through a batch system. The clusters all use Slurm to manage the batch system.
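For illustration, a minimal Slurm batch script might look like the following sketch (the account, partition, module, and program names are placeholders; consult the Slurm documentation pages for the values appropriate to your group and cluster):

    #!/bin/bash
    #SBATCH --job-name=myjob        # name shown in the queue
    #SBATCH --account=mygroup       # placeholder: your Slurm account
    #SBATCH --partition=notchpeak   # placeholder: the partition to run in
    #SBATCH --nodes=1               # number of nodes
    #SBATCH --ntasks=1              # number of tasks (processes)
    #SBATCH --time=01:00:00         # walltime limit (HH:MM:SS)

    module load gcc                 # example: load the software the job needs
    ./my_program                    # placeholder: the program to run

Such a script is submitted from a login node with sbatch (e.g., sbatch myjob.slurm), and its status can be checked with squeue -u $USER.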

Each of the above clusters is operated in a condominium style, with general CHPC-owned nodes as well as nodes owned by individual research groups. On notchpeak and redwood there is an allocation system for time on the general CHPC-owned nodes (see the Allocations section below); the kingspeak and lonepeak clusters are run unallocated. The owner nodes are available for all CHPC users to run on as guests, in a preemptable manner, when they are not in use by the owner group.
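As an illustration of guest use (the account and partition names below are assumptions following a common naming pattern; check the Slurm accounts and partitions documentation for the actual names on each cluster), a guest job points at a guest account and partition and must tolerate preemption:

    #SBATCH --account=owner-guest         # assumed name of the guest account
    #SBATCH --partition=notchpeak-guest   # assumed name of the guest partition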

In addition, we have a restricted cluster, ash, owned by a single group; guest access is allowed via ash-guest.chpc.utah.edu.

The CHPC documentation includes an introduction to Slurm as well as information on Slurm accounts and partitions.
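A few standard Slurm commands are also useful for exploring a cluster once you log in, for example:

    sinfo                            # list partitions and node states
    squeue -u $USER                  # list your pending and running jobs
    sacctmgr show assoc user=$USER   # list the accounts you can charge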

Finally, we also have two Windows systems focused on statistics, accessible via remote desktop: beehive, in the general environment, and narwhal, in the protected environment for work with protected data.

Passwords

Your password for all Unix systems and for samba/cifs mounts of fileservers maintained by CHPC is your campus CIS password. Please go to https://cis.utah.edu/ to change your password.

For Windows OS computers that are joined to the CHPC Active Directory (chpc.ad.utah.edu), your password is also maintained via CIS (https://cis.utah.edu/). If your Windows computer is not joined to the AD, then your password is maintained locally on that machine (local account). Users of CHPC-administered systems can email helpdesk@chpc.utah.edu for assistance.

File Storage

CHPC provides home directory space which is NFS mounted on all of the HPC systems. This file system is not backed up. Users are responsible for moving important data to a more permanent location such as their home department file server. Individual groups can purchase additional home directory space that includes backup to tape, as well as group space that can include quarterly archive backup to tape. For more information, see Disk Usage Information.
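Because home directories are not backed up, it is worth making regular copies of critical data from a login node; for example, with rsync (the destination host and path below are placeholders):

    rsync -av ~/project/ u0123456@myserver.mydept.utah.edu:/backup/project/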

CHPC also provides several scratch file systems on the various HPC platforms we support. Please see the CHPC Disk Usage Policy.
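A common pattern (sketched below; the scratch path is an example only, as the available scratch file systems differ per cluster) is for a batch job to stage its working files in scratch and copy results back to home when done:

    # inside a batch script (example scratch path)
    SCRDIR=/scratch/general/vast/$USER/$SLURM_JOB_ID
    mkdir -p $SCRDIR
    cp $HOME/project/input.dat $SCRDIR && cd $SCRDIR
    ./my_program                      # placeholder program
    cp results.dat $HOME/project/     # copy results back to home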

Allocations

For the general environment, CHPC awards allocations on our systems based on proposals submitted to a review committee. Research groups must submit requests for allocations. Please refer to our Allocation Documentation and our Allocations Policy for detailed information. Please note that allocation requests must be completed by the CHPC PI or a designated delegate. To specify a delegate, the CHPC PI must email helpdesk@chpc.utah.edu and provide the name and uNID of the user who should have allocation delegate rights.

Please note that the protected environment has a separate allocation process; see the protected environment allocation information page for details.

CHPC Documentation

The overview of CHPC resources is a good starting place to learn about what CHPC provides.

Information common to all of the HPC clusters is given in the HPC Cluster User Guide, with information specific to a given cluster found in the web-based user guides for the individual clusters (notchpeak, kingspeak, lonepeak, and redwood).

In addition, there are user guides for our Windows statistics machines (beehive and narwhal).

There is also a software documentation page that includes information on programming tools, access utilities, file transfer utilities, the batch system, and select applications installed on the CHPC HPC clusters.

These guides should help you get going on a given CHPC resource. However, if, after going through these guides and examples, you find you are still having difficulty getting started, please contact CHPC by sending a message to helpdesk@chpc.utah.edu stating that you wish to meet with a consultant for some assistance.

All of our documentation is available on the Web (including a FAQ), which you may browse through by visiting CHPC Documentation.

Getting Help

If you have a question or run into problems, please contact the CHPC Help Desk.

The best way to contact us is by emailing helpdesk@chpc.utah.edu and stating your question or problem. Please provide as much detail as possible, including your name and contact information, which system you're having trouble with, when the problem occurred, and any job numbers, scripts, or error messages which apply to the problem. For more information, see Getting Help.

CHPC Presentation Series

The CHPC Presentation Series is presented every fall semester, with select presentations repeated in the spring and summer semesters. These presentations are designed to help you learn about CHPC, parallel programming, some of the specialized software CHPC supports, and other topics we think might assist you in your research. The link to the presentations (above) also includes links to the latest slides for the talks along with abstracts of each presentation. The presentations can be joined remotely via Zoom. Please let us know if you have ideas for presentations you would like added to our series.


Last Updated: 7/5/23