CHPC News

This section provides the user with an overview and index of what can be found in the News directories.

  • News Items: History of all news announcements made by CHPC this year including notifications of presentations, downtimes and policy changes.
  • 2013 News Items: History of all news announcements made by CHPC in 2013 including notifications of presentations, downtimes and policy changes.
  • 2012 News Items: History of all news announcements made by CHPC in 2012 including notifications of presentations, downtimes and policy changes.
  • 2011 News Items: History of all news announcements made by CHPC in 2011 including notifications of presentations, downtimes and policy changes.
  • 2010 News Items: History of all news announcements made by CHPC in 2010 including notifications of presentations, downtimes and policy changes.
  • 2009 News Items: History of all news announcements made by CHPC in 2009 including notifications of presentations, downtimes and policy changes.
  • 2008 News Items: History of all news announcements made by CHPC in 2008 including notifications of presentations, downtimes and policy changes.
  • 2007 News Items: History of all news announcements made by CHPC in 2007 including notifications of presentations, downtimes and policy changes.
  • 2006 News Items: History of all news announcements made by CHPC in 2006 including notifications of presentations, downtimes and policy changes.
  • 2005 News Items: History of all news announcements made by CHPC in 2005 including notifications of presentations, downtimes and policy changes.
  • 2004 News Items: History of all news announcements made by CHPC in 2004 including notifications of presentations, downtimes and policy changes.
  • Downtimes: Downtimes in the current year.
  • 2013 Downtimes: Downtimes in 2013.
  • 2012 Downtimes: Downtimes in 2012.
  • 2011 Downtimes: Downtimes in 2011.
  • 2010 Downtimes: Downtimes in 2010.
  • 2009 Downtimes: Downtimes in 2009.
  • 2008 Downtimes: Downtimes in 2008.
  • 2007 Downtimes: Downtimes in 2007.
  • 2006 Downtimes: Downtimes in 2006.
  • 2005 Downtimes: Downtimes in 2005.
  • 2004 Downtimes: Downtimes in 2004.
  • Current System Status: Summary of the current status of HPC systems
  • Newsletters: CHPC Newsletters, past and present (PDF and HTML)
  • Presentations

Most Recent News Items:

FastX: New Tool for Remote Graphical Sessions

Posted: April 24, 2014

CHPC has recently deployed FastX, a tool that allows users to interact with remote systems graphically in a much more efficient and effective way than simple X forwarding. The new tool allows users to display individual applications or whole desktop environments, and also allows for detaching from and reattaching to graphical sessions (from the same or a different location).

The FastX server is now on all of the cluster interactive nodes. In addition, we are in the process of provisioning a set of nodes for visualization needs. They are friscoX.chpc.utah.edu, where X=1-6. Frisco1-4 are now in service, and frisco5-6, which have graphics cards that allow for graphics hardware acceleration (OpenGL), will be ready in the next week or so.
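
For comparison, the older workflow that FastX improves on is plain X forwarding over SSH; a minimal sketch (frisco1 is one of the visualization nodes named above, and xterm simply stands in for whatever graphical application you run):

  # Plain X forwarding: every drawing operation crosses the network, and the
  # session does not survive a dropped connection.
  ssh -X frisco1.chpc.utah.edu
  # ...then, on the remote node, start a graphical application, for example:
  xterm &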

The terms of our license agreement allow FastX clients to be installed on both university owned and personal machines.

For details on how to download a client, how to use FastX, and CHPC's usage policies for this tool, please see: https://wiki.chpc.utah.edu/display/DOCS/FastX

Please contact us with any questions or issues regarding the use of FastX.


New general KP nodes and New Guest Access Options

Posted: April 21, 2014

Kingspeak general CHPC nodes now include four 20-core nodes along with the original 32 16-core nodes. This results in new ways to request general nodes. These have been added to the “Resource Specification Section” of the Kingspeak User Guide found at https://wiki.chpc.utah.edu/display/DOCS/Kingspeak+User+Guide
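
As a rough sketch only (the exact node properties and syntax CHPC expects are documented in the user guide linked above), a Torque/PBS resource request for the general nodes might look like the following, with ppn matched to the node type:

  # Request two of the original 16-core general nodes for one hour; for the
  # newer 20-core nodes, ppn=20 would be used instead.
  #PBS -l nodes=2:ppn=16,walltime=1:00:00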

Also, there are new options for guest access to compute nodes. All guest jobs are pre-emptible by any job submitted by the owner of the nodes, and jobs using guest access should not use /scratch/local, as there is no mechanism to recover output or clean up scratch files for jobs that are preempted. Jobs run in this manner are charged to the guest account and can be run even if you still have your own allocation. Here are the available guest access options (a minimal example job script follows the list):

  • For owner nodes on kingspeak and ember use ‘#PBS -A owner-guest’
  • For ash.chpc.utah.edu use ‘#PBS -A smithp-guest’ (use ash-guest.chpc.utah.edu to access general ash interactive nodes)
  • For telluride.chpc.utah.edu use ‘#PBS -A cheatham-guest’ (please note that this cluster is running RH5)
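
As a minimal sketch, assuming a standard Torque/PBS batch script (everything other than the -A line is illustrative), a guest job on the kingspeak or ember owner nodes could look like this:

  #!/bin/bash
  #PBS -A owner-guest                       # guest account from the list above
  #PBS -l nodes=1:ppn=16,walltime=2:00:00   # illustrative resource request
  # Guest jobs can be preempted at any time, so write output to a shared
  # scratch file system rather than /scratch/local.
  cd $PBS_O_WORKDIR
  ./my_program > my_output.txt              # my_program is a placeholder

Submission is unchanged (qsub myscript.pbs); for ash or telluride, substitute smithp-guest or cheatham-guest on the -A line as listed above.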

Finally, I would like to remind users about CHPC policy regarding interactive node usage. Any processes on the cluster interactive nodes that run longer than 15 minutes OR that impact the ability of others to make use of the node will be killed. The complete policy can be found at https://wiki.chpc.utah.edu/display/policy/2.1.1+Cluster+Interactive+Node+Policy
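
If your work needs more than 15 minutes of compute time, one option is to move it off the interactive nodes and into an interactive batch session on a compute node; a sketch, assuming the standard Torque qsub -I syntax:

  # Request a one-hour interactive session on a compute node so that heavy or
  # long-running processes do not land on the shared interactive (login) node.
  qsub -I -l nodes=1:ppn=16,walltime=1:00:00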

As always, if you have any questions regarding CHPC resources and policies, please contact us at issues@chpc.utah.edu


Virtual School of Computational Science and Engineering this summer

Posted: April 9, 2014

CHPC will again be a satellite site for this year's summer classes of the Virtual School of Computational Science and Engineering.

There will be two multi-day classes, both held in the INSCC 110 Auditorium. For more information about these classes and to register, please follow the links below. When registering, choose the University of Utah as your site.

Harness the Power of GPUs: Introduction to GPGPU Programming
June 16 – June 20, 2014
9:00 a.m. - 2:00 p.m. MST
Register at https://www.xsede.org/web/xup/course-calendar/-/training-user/class/264.

Data Intensive Summer School
June 30 – July 2, 2014
9:00 a.m. - 3:00 p.m. MST
Register at https://www.xsede.org/web/xup/course-calendar/-/training-user/class/263.

Note that these classes are open to anyone, not just University of Utah affiliates. We encourage participants from other institutions to register.


Matlab updated to R2014a

Posted: April 7, 2014

We have updated Matlab to the latest release, R2014a. As part of the process, we now run a new redundant license server setup, which should make license availability more resilient.

An important new feature in this release is the removal of the limit on the number of parallel workers per node (previously 12), which should be beneficial on CHPC cluster nodes with higher core counts.

For the complete list of release notes for R2014a, see http://www.mathworks.com/help/relnotes/index.html.

Also, please note that this version of Matlab does not run on the RHEL5 Linux OS, which we still run on some machines (the meteo, atmos and turretarch nodes). If you use these machines, keep using R2013a by running "source /uufs/chpc.utah.edu/sys/pkg/matlab/R2013a/etc/matlab.csh" after logging in to the machine. OS upgrades on these machines are planned for the near future.
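
In practice the fallback described above might look like the sketch below (the path is taken verbatim from this announcement; the command assumes a csh-family shell, matching the matlab.csh script):

  # On the RHEL5 machines (meteo, atmos and turretarch nodes), point your
  # environment at R2013a after logging in, then start Matlab as usual.
  source /uufs/chpc.utah.edu/sys/pkg/matlab/R2013a/etc/matlab.csh
  matlab &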


Guest access to ash.chpc.utah.edu now available

Posted: March 27, 2014

As was announced earlier, the smithp ember nodes were removed from ember and used to create a new cluster called ash. We are now opening up guest access to this cluster. All jobs run in this manner are pre-emptible, and there are no charges against your group allocation for this usage.

Some important information:

  • You need to first get the new login files from here for tcsh and from here for bash
  • SSH host keys have changed. You should delete your .ssh/known_hosts file or edit it to remove all entries for the compute nodes (emXXX); a short sketch of this cleanup follows the list.
  • The CHPC general interactive nodes for ash are ash5.chpc.utah.edu and ash6.chpc.utah.edu; you can use ash-guest.chpc.utah.edu to obtain one of these two nodes. The name ash.chpc.utah.edu points to the interactive nodes restricted to the Smith group.
  • The cluster-specific applications are in /uufs/ash.peaks/sys/pkg. We have tested what we can and all seems fine. /uufs/chpc.utah.edu/sys/pkg is also mounted, as it was on ember, for non-cluster-specific application builds.
  • If you are running your own codes, you should recompile using the new ash version of mpi, etc.
  • Scratch file systems mounted are /scratch/kingspeak/serial and /scratch/ibrix/chp_gen
  • It is best if you do not use /scratch/local as a guest, as you do not have access to retrieve or clean up any files from this location in the likely event that your job is preempted.
  • The account to use on the #PBS -A line is smithp-guest
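
A short sketch of the host-key cleanup mentioned in the list above (em001 is a hypothetical node name following the emXXX pattern; repeat for each stale entry, or set the whole file aside):

  # Remove the stale SSH host key for one renamed compute node.
  ssh-keygen -R em001
  # Or move the whole known_hosts file out of the way and let it be rebuilt:
  mv ~/.ssh/known_hosts ~/.ssh/known_hosts.old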

Please address any questions and report any issues to issues@chpc.utah.edu


Machine Learning and Parallel Computing with MATLAB

Posted: March 26, 2014

Please join MathWorks at a complimentary MATLAB seminar for educators, academic researchers, and students at the University of Utah:
Date:     Wednesday, April 2nd
Location: University of Utah, Intermountain Network Scientific (INSCC), Room 110
Time:     9:00 A.M. - 12:00 P.M.

Register Now

The event features two technical sessions presented by a MathWorks engineer:

9:00 A.M. – 10:45 A.M.                  Machine Learning with MATLAB

Learn about several machine learning techniques available in MATLAB and how to quickly explore your data, evaluate machine learning algorithms, compare the results, and apply the best technique to your problem.

-------Break---------

11:00 A.M. – 12:00 P.M.                 Parallel Computing with MATLAB

Learn how to solve computationally and data-intensive problems using multicore processors and computer clusters. We will introduce you to high-level programming constructs that allow you to parallelize MATLAB applications without low-level MPI programming and run them on multiple processors.

View complete session descriptions and register at https://www.mathworks.com/company/events/seminars/seminar90093.html.


telluride and ember back online

Posted: February 25, 2014

Both ember and telluride are back online and scheduling jobs. As a reminder, the smithp nodes have been split off from ember; therefore, the smithp-guest account will not run there. Later this week or early next week, an announcement will be made about smithp-guest access to these nodes as part of the new ash cluster.

If you see any issues, please report them to issues@chpc.utah.edu.


Kingspeak, Meteo nodes, and UCS (turretarch) back in service

Posted: February 25, 2014

The above-mentioned resources are now back in service. Apexarch should be up within the next hour. There was an issue with a mislabeled breaker that led to this outage. Now that this has been resolved, the scheduled electrical work at the DDC is progressing.


Kingspeak and Apexarch Down as of approximately 8:45 AM

Posted: February 25, 2014

Duration: unknown

When the electricians were shutting off the power to ember and telluride, they also unexpectedly shut off the power to kingspeak and apexarch (poet). We are currently trying to figure out why, and whether the power to these clusters needs to be off for the electrical work being done today. We will update this notice as more information is available.