CHPC - Research Computing Support for the University

In addition to deploying and operating high performance computational resources and providing advanced user support and training, CHPC serves as an expert team to broadly support the increasingly diverse research computing needs on campus. These needs include support for big data, big data movement, data analytics, security, virtual machines, Windows science application servers, protected environments for data mining and analysis of protected health information, and advanced networking. Visit our Getting Started page for more information.

XSEDE OpenMP workshop reminder

INSCC Auditorium
Tuesday, October 6th, 2015
9 am - 3 pm, with a one-hour break from 11 am to noon
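For those planning to attend, a minimal OpenMP example in C gives a flavor of the material. This is an illustrative sketch, not taken from the workshop itself; it builds with any OpenMP-capable compiler (e.g., gcc -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        /* Each thread in the parallel region prints its own ID. */
        #pragma omp parallel
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }

The OMP_NUM_THREADS environment variable controls how many threads the parallel region launches.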

Introduction to Programming with MPI

INSCC Auditorium
Thursday, October 8th, 2015
1:00 - 2:00 pm
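As a preview of the topic, the canonical MPI "hello world" in C looks like the sketch below (illustrative only, not the workshop's own material). It compiles with mpicc and runs under mpirun:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                        /* shut the runtime down */
        return 0;
    }

For example, mpirun -np 4 ./a.out launches four processes, each printing its own rank.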

/scratch/ibrix/chpc_gen is 94% full

Please check your usage and clean up as much space as possible.

NVIDIA online OpenACC course and free development toolkit

Posted September 22, 2015
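OpenACC uses compiler directives to offload loops to GPUs. As an illustration of the style the course teaches (a sketch assuming an OpenACC-capable compiler such as the PGI toolkit's pgcc -acc), a SAXPY loop can be accelerated with a single pragma:

    #include <stdio.h>
    #define N 1000000

    int main(void) {
        static float x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* One directive asks the compiler to offload the loop to an
           accelerator; without OpenACC support the pragma is ignored
           and the loop compiles as ordinary serial C. */
        #pragma acc parallel loop
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f (expected 4.0)\n", y[0]);
        return 0;
    }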

CHPC on Twitter

  • CHPCUpdates:  CHPC Updates Twitter
  • CHPCOutages:  CHPC Outages Twitter

News History...

CHPC Resources Aid Study of Massive Galaxies

University of Utah astronomers are using CHPC parallel computing resources to study galaxy evolution and cosmology. Adam Bolton, assistant professor in the Department of Physics and Astronomy, and his research group are members of the Baryon Oscillation Spectroscopic Survey (BOSS) project within the international Sloan Digital Sky Survey III (SDSS-III) collaboration. BOSS is currently building the largest-ever three-dimensional map of galaxies using a 2.5-meter telescope at Apache Point Observatory (APO) in New Mexico. Researchers are measuring the statistical patterns within this map to understand the nature of the mysterious “dark energy” that seems to be accelerating the expansion of the universe. Kyle Dawson, also of the University’s Department of Physics and Astronomy, is heavily involved in the BOSS project as well.

System Status

last update: 10/07/15 11:22 am
General Nodes
  system      procs (used/total)   % util.
  ember       864/984              87.8%
  kingspeak   832/832              100%
  lonepeak    256/256              100%

Restricted Nodes
  system      procs (used/total)   % util.
  ash         4720/6316            74.73%
  apexarch    0/152                0%
  ember       636/696              91.38%
  kingspeak   3488/3812            91.5%
  lonepeak    0/976                0%

Cluster Utilization