
CHPC - Research Computing and Data Support for the University

In addition to deploying and operating high performance computational resources and providing advanced user support and training, CHPC serves as an expert team to broadly support the increasingly diverse research computing and data needs on campus. These needs include support for big data, big data movement, data analytics, security, virtual machines, Windows science application servers, protected environments for data mining and analysis of protected health information, and advanced networking.

If you are new to CHPC, the best place to start to get more information on CHPC resources and policies is our Getting Started page.

Upcoming Presentations:


Posted September 22, 2023


Posted September 8, 2023

CHPC reminds users of their responsibility to understand what the software they use is doing, especially software they download, install, or compile themselves. Read More...


Posted August 23, 2023

General Environment FastX Outage: Monday, August 21, 2023, starting at 9 am

Posted August 11, 2023

Allocation Requests for Fall 2023 are Due September 1st, 2023

Posted August 1, 2023

CHPC ANNOUNCEMENT: Establishing quotas on /scratch/general/vast effective August 1, 2023

Posted July 5, 2023

FastX3 Outage: Saturday, July 1 (license issue resolved by 7:45 pm)

Posted July 3, 2023

CHPC Downtime: Tuesday, July 11, 2023, starting at 7:30 am

Posted June 26, 2023

Spring 2023 CHPC Newsletter

Posted April 19, 2023

CHPC ANNOUNCEMENT: CHPC staff are working both remote and hybrid schedules.

News History...

Identifying Atypical Wind Events

Pando Object Storage Archive Supports Weather Research

By Brian K. Blaylock (1), John D. Horel (1,2), and Chris Galli (1,2)

(1) Department of Atmospheric Sciences, University of Utah; (2) Synoptic Data, Salt Lake City, Utah

Terabytes of weather data are generated every day by gridded model simulations and in situ and remotely sensed observations. With this accelerating accumulation of weather data, efficient computational solutions are needed to process, archive, and analyze these massive datasets. The Open Science Grid (OSG) is a consortium of computing resources around the United States that makes idle computer cycles available for use by researchers in diverse scientific disciplines. The OSG is appropriate for high-throughput computing, that is, many parallel computational tasks. This work demonstrates how the OSG has been used to compute a large set of empirical cumulative distributions from hourly gridded analyses of the High-Resolution Rapid Refresh (HRRR) model run operationally by the Environmental Modeling Center of the National Centers for Environmental Prediction. The data are archived in Pando, an object storage archive named after the vast stand of aspen trees in Utah. These cumulative distributions, derived from a 3-yr HRRR archive, are computed for seven variables, over 1.9 million grid points, and each hour of the calendar year. The HRRR cumulative distributions are used to evaluate near-surface wind, temperature, and humidity conditions during two wildland fire episodes: the North Bay fires, a wildfire complex in Northern California during October 2017 that was the deadliest and costliest in California history, and the western Oklahoma wildfires during April 2018. The approach used here illustrates ways to discriminate between typical and atypical atmospheric conditions forecast by the HRRR model. Such information may be useful for model developers and operational forecasters assigned to provide weather support for fire management personnel.

Read the article in the Journal of Atmospheric and Oceanic Technology.
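
To make the approach concrete, the sketch below shows the basic computation at a single grid point: summarize a multiyear archive of hourly values as an empirical cumulative distribution, then place a new forecast value on that distribution to judge how typical it is. This is an illustrative Python sketch, not the authors' code; the variable names, the synthetic data, and the particular percentile levels are assumptions.

import numpy as np

def empirical_cdf_percentiles(samples, levels=(1, 5, 10, 25, 50, 75, 90, 95, 99)):
    """Summarize an empirical CDF by a fixed set of percentile levels.

    samples: 1-D array of one variable (e.g., 10-m wind speed) at a single
    HRRR grid point, pooled over the same hour across roughly three years.
    The levels shown here are illustrative, not those used in the paper.
    """
    return np.percentile(np.asarray(samples, dtype=float), levels)

def percentile_rank(samples, value):
    """Fraction of archived values at or below a forecast value.

    Ranks near 0 or 1 flag the forecast as atypical for that grid point
    and hour; ranks near 0.5 are climatologically ordinary.
    """
    samples = np.asarray(samples, dtype=float)
    return np.count_nonzero(samples <= value) / samples.size

# Synthetic stand-in for ~3 years of analyses at one grid point and hour.
rng = np.random.default_rng(0)
wind_archive = rng.gamma(shape=2.0, scale=3.0, size=3 * 365)  # hypothetical m/s values

print(empirical_cdf_percentiles(wind_archive))
print(percentile_rank(wind_archive, 22.5))  # how unusual a 22.5 m/s forecast would be

Each such task is independent across grid points, variables, and hours of the year, which is what makes the problem a good fit for the OSG's high-throughput model: many small parallel jobs, each reducing one slice of the archive.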

System Status

General Environment

last update: 2023-09-26 20:00:03

General Nodes
System       Cores (in use / total)   Utilization
kingspeak       929 /   988            94.03%
notchpeak      1177 /  3212            36.64%
lonepeak       3200 /  3236            98.89%

Owner/Restricted Nodes
System       Cores (in use / total)   Utilization
ash            3632 /  3812            95.28%
notchpeak     16025 / 16908            94.78%
kingspeak      5396 /  5484            98.40%
lonepeak        416 /   416           100.00%

Protected Environment

last update: 2023-09-26 20:00:03

General Nodes
System       Cores (in use / total)   Utilization
redwood          64 /   552            11.59%

Owner/Restricted Nodes
System       Cores (in use / total)   Utilization
redwood         720 /  5980            12.04%

Cluster Utilization

Last updated: 2023-09-26