2010 CHPC News Announcements

CHPC Presentation: Hybrid MPI-OpenMP Programming, Thursday December 16, 1:00 p.m., INSCC Auditorium

Posted: December 13, 2010

Event date: December 16, 2010

CHPC Presentation: Hybrid MPI-OpenMP

By Martin Cuma
Location: INSCC Auditorium (Rm 110)
Date: December 16, 2010
Time: 1:00 p.m.

In this talk we will introduce the hybrid MPI-OpenMP programming model, designed for distributed shared memory parallel (DSMP) computers. The CHPC clusters are representative of this family, having two or more shared memory processors per node. OpenMP generally provides a better-performing alternative for parallelization inside a node, while MPI is used for communication between the distributed processors. We will discuss the cases in which the hybrid programming model is beneficial and provide examples of simple MPI-OpenMP codes.
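
For readers who want a concrete picture ahead of the talk, a minimal hybrid sketch in C is shown below. It is an illustrative example only, not material from the presentation; the compile command (e.g. "mpicc -fopenmp hybrid.c -o hybrid") is an assumption that varies with the MPI installation. Each MPI process opens an OpenMP parallel region, so MPI handles communication between nodes while the OpenMP threads share memory within a node.

    /* Illustrative hybrid MPI-OpenMP sketch: every OpenMP thread of every
     * MPI rank reports its (rank, thread) pair. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* MPI_THREAD_FUNNELED is sufficient when only the master thread
         * makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* OpenMP parallel region inside each MPI process: typically one or
         * two processes per node and several threads per process. */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }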


CHPCFS is back up

Posted: December 6, 2010

We have ended the brief downtime. If you have any issues accessing your filesystems, please send us an issue.


EMERGENCY DOWNTIME - CHPCFS

Posted: December 6, 2010

Duration: 30 Minutes

Systems Affected/Downtime Timelines: chpcfs, the application tree, and other group filesystems

This downtime is needed to fix the issues we have been having with the file systems. We will send a note when chpcfs is restored.


CHPC Presentation ** CANCELLED ** (High-Performance Networks and Long-Distance Data Transfers)

Posted: November 29, 2010

Event date: December 2, 2010

The CHPC Presentation "High-Performance Networks and Long-Distance Data Transfers," originally scheduled for December 2, 2010, has been cancelled. We plan to reschedule this talk during our Spring 2011 Presentation Series.


CHPC Presentation: **NEW** Using Python for Scientific Computing, Thurs. 11/11, 1:00 p.m., INSCC Auditorium (RM110)

Posted: November 8, 2010

Event date: November 11, 2010

CHPC Presentation

Using Python for Scientific Computing
Date: Thursday, November 11th, 2010
Time: 1:00 p.m.
by Wim Cardoen

In this talk we will discuss several features which make Python a viable tool for scientific computing:

  • strength and flexibility of the Python language
  • mathematical libraries (numpy, scipy, ...)
  • graphical libraries (matplotlib, ...)
  • extending Python using C/C++ and Fortran


CHPC Presentation: Statistical Resources at CHPC, Thurs. 11/4, 1:00 p.m., INSCC Auditorium (RM110)

Posted: November 3, 2010

CHPC Presentation

Statistical Resources at CHPC
Date: Thursday, November 4th, 2010
Time: 1:00 p.m.
by Byron Davis

This presentation gives users (and potential users) of CHPC's statistical resources an overview of the equipment and software presently available. Additionally, it will present a list of the specialized statistical software we have supported over the past 10 years or so.


CHPC Presentation: Mathematical Libraries at CHPC, Thurs. 10/28, 1:00 p.m., INSCC Auditorium (RM110)

Posted: October 26, 2010

Event date: October 28, 2010

CHPC Presentation

Mathematical Libraries at CHPC
Date: Thursday, October 28th, 2010
Time: 1:00 p.m.
by Martin Cuma

In this talk we introduce users to the mathematical libraries installed on the CHPC systems, which are designed to ease programming and to speed up scientific applications. First, we will talk about BLAS, a standardized library of Basic Linear Algebra Subroutines, and present a few examples. Then we will briefly focus on other libraries that are in use, including the freely available LAPACK, ScaLAPACK, PETSc and FFTW.
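
To give a flavor of what using BLAS looks like in practice, here is a small C sketch that multiplies two 2x2 matrices with dgemm through the C interface. This is an illustrative example, not one from the talk, and the link line (e.g. "gcc dgemm_demo.c -lcblas -lblas") is an assumption that depends on which BLAS installation is used.

    /* Illustrative BLAS call: C = alpha*A*B + beta*C for 2x2 row-major
     * matrices; the expected result is C = [19 22; 43 50]. */
    #include <stdio.h>
    #include <cblas.h>

    int main(void)
    {
        double A[4] = {1.0, 2.0, 3.0, 4.0};
        double B[4] = {5.0, 6.0, 7.0, 8.0};
        double C[4] = {0.0, 0.0, 0.0, 0.0};

        /* m = n = k = 2, alpha = 1, beta = 0, leading dimensions = 2 */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2, 1.0, A, 2, B, 2, 0.0, C, 2);

        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }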


New CHPC Account Creation Process Now Online

Posted: October 18, 2010

The CHPC account application process is now available online. Although we will still accept the old paper form, we encourage everyone to use the new system as the process is much faster.

The process may be initiated either by the applicant, or by the PI (faculty advisor) for whom the applicant will be working, or by that PI's CHPC-recognized delegate. Once the online form has been completed and submitted to CHPC, the system will automatically request verification of the applicant's email address by sending an email to the applicant and waiting for a response. An applicant's request will not be processed further without a response to this email verification. Also, if the applicant initiates the process, a request for approval of the application will be sent via e-mail to the faculty advisor. This approval must also be completed before the account will be created.

CHPC policy requires that the faculty advisors of any non-faculty CHPC account holders, including post-docs, also have CHPC accounts. If the applicant identifies a faculty member who does not yet have an account, the system will create an account after CHPC has verified the faculty member’s status and email. We encourage faculty members who wish to create accounts for themselves and their research group members to first speak with CHPC’s associate director, Julia Harrison, so we can identify the resource needs.

The link for the online application is available at the following URL:
https://www.chpc.utah.edu/apps/profile/account_request.php


CHPC Presentation: Chemistry Packages at CHPC

Posted: October 15, 2010

Date: October 21, 2010

Time: 1:00 p.m.

Location: INSCC Auditorium (Rm 110)

Presented by: Anita Orendt

This talk will focus on the computational chemistry software that is available on the CHPC clusters. It will give an overview of the packages, their capabilities, and their organization at CHPC, as well as provide details on how to access the packages on CHPC computer resources.


**IMPORTANT change to local disk on all CHPC clusters' compute nodes**

Posted: October 4, 2010

Event date: October 12, 2010

During the upcoming downtime on October 12th, 2010, we will be implementing a significant change in how we deploy the local disk resource on the compute nodes of all of the clusters. This change will improve node reliability in cases where the /tmp space fills up or there is a local disk failure. First, /tmp will be relocated to a RAM disk using the tmpfs file system and will be limited to 128 MB. The /tmp space will be completely scrubbed between jobs. This space will be very fast and is intended for small, truly temporary files. Second, the balance of the available disk (what is currently located at /tmp) will be made available at /scratch/local and will be aggressively scrubbed of files more than 7 days old.

**PLEASE NOTE** If you own nodes on sanddunearch, you have the option of turning the scrubbing off. Please let us know if you have that need; otherwise, the same policies described above will apply to your nodes.

Any of you who use the local disk on the compute nodes (writing files to /tmp) will need to change your scripts appropriately to reflect these changes; that is, replace any occurrence of /tmp with /scratch/local. Please let us know if you have questions or need any assistance by sending email to issues@chpc.utah.edu.
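
As a minimal illustration of the kind of change required (the directory names come from this announcement, but the file name and code are hypothetical), a C code that writes node-local work files simply switches directories; batch scripts need the analogous one-line edit:

    /* Hypothetical sketch: a code that used to write its node-local work
     * file under /tmp should now write it under /scratch/local. */
    #include <stdio.h>

    int main(void)
    {
        /* old: "/tmp/myjob_work.dat" (now a small, scrubbed RAM disk)
         * new: the large node-local disk at /scratch/local */
        const char *path = "/scratch/local/myjob_work.dat";

        FILE *fp = fopen(path, "w");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }
        fprintf(fp, "large intermediate data goes here\n");
        fclose(fp);
        return 0;
    }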


CHPC Presentations: Introduction to Programming with OpenMP

Posted: October 4, 2010

Event date: October 7, 2010

CHPC Presentation

Introduction to Programming with OpenMP
Date: October 7th, 2010
Time: 1:00 p.m.
Location: INSCC Auditorium (Rm 110)
Presented by: Martin Cuma

This talk introduces OpenMP, an increasingly popular and relatively simple shared memory parallel programming model. Two parallelizing schemes, parallel do loops and parallel sections, will be detailed using examples. Various clauses that allow the user to modify the parallel execution will also be presented, including sharing and privatizing of variables, scheduling, synchronization and mutual exclusion of parallel tasks. Finally, a few hints will be given on removing loop dependencies in order to obtain effective parallelization.
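
For those who have not seen OpenMP before, a minimal sketch of a parallel loop is shown below. It is illustrative only, not a slide from the talk, and the compile flag (e.g. "gcc -fopenmp sum.c") is an assumption that differs between compilers.

    /* Illustrative OpenMP parallel for loop: iterations are divided among
     * the threads, the loop index is private to each thread, and the
     * reduction clause combines the per-thread partial sums safely. */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;
        int i;

        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (i = 0; i < n; i++) {
            sum += 1.0 / (i + 1.0);
        }

        printf("harmonic sum = %f (up to %d threads)\n",
               sum, omp_get_max_threads());
        return 0;
    }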


CHPC Major Downtime: Tuesday October 12, 2010 ALL DAY **IMPORTANT change to local disk on compute nodes**

Posted: September 29, 2010

Event date: October 12, 2010

Duration: 7:00 a.m. until late afternoon or evening

Systems Affected/Downtime Timelines:

  • All HPC Clusters
  • Intermittent outage of some of the CHPC supported networks
  • All CHPC supported desktops mounting CHPCFS file systems.
  • More explicit details will be sent as the downtime approaches.

Instructions to User:

  • Expect intermittent outages of the CHPC supported networks.
  • All of the Desktops mounting the CHPCFS file systems will be affected. Plan for CHPCFS to be unavailable for a good part of the day. Those with Windows and Mac desktops should be able to function, but may not have access to the CHPCFS file systems. CHPC recommends that you reboot your desktops after the downtime.
  • All HPC Clusters will be down most of the day.

During this downtime maintenance will be performed in the datacenters, requiring many systems to be down most of the day. CHPC will take advantage of this down time to do a number of additional tasks, including work on the network and file servers.

All file systems served from CHPCFS will be unavailable for most of the day. This includes HPC home directory space as well as departmental file systems supported by CHPC. We will work to get things online as soon as possible.

Important change to local disk on compute nodes

During this downtime we will be implementing a significant change in how we deploy the local disk resource on the compute nodes of all of the clusters. This change will improve node reliability in cases where the /tmp space fills up or there is a local disk failure. First, /tmp will be relocated to a RAM disk using the tmpfs file system and will be limited to 128 MB. The /tmp space will be completely scrubbed between jobs. This space will be very fast and is intended for small, truly temporary files. Second, the balance of the available disk (what is currently located at /tmp) will be made available at /scratch/local and will be aggressively scrubbed of files more than 7 days old.

**PLEASE NOTE** If you own nodes on sanddunearch, you have the option of turning the scrubbing off. Please let us know if you have that need; otherwise, the same policies described above will apply to your nodes.

Any of you who use the local disk on the compute nodes (writing files to /tmp) will need to change your scripts appropriately to reflect these changes; that is, replace any occurrence of /tmp with /scratch/local. Please let us know if you have questions or need any assistance by sending email to issues@chpc.utah.edu.


System Changes at CHPC - New cluster coming, old clusters to be retired October 4th, 2010

Posted: September 23, 2010

Event date: October 4, 2010

The good news:

CHPC is continuing the process of procuring a new cluster which will replace our oldest clusters. The new system will be a shared resource between the ICSE (Phil Smith) research group and the general campus community. We are moving quickly and hope to have the new system available to users in just a few months. The general campus portion of the new system is expected to provide roughly 4 times the compute capacity of the retiring systems and will be interconnected with an InfiniBand network.

The "bad" news:

To make room for the new cluster (space, heating and cooling constraints), we will need to retire our oldest clusters, namely delicatearch, tunnelarch and landscapearch. We plan to shut these clusters down the morning of October 4th, 2010 to prepare the machine room space for the new system in a timely manner. These old systems currently total 2.2 Tflops, whereas the general campus portion of the new system is estimated to be roughly 9.3 Tflops!

Also note that the old /scratch file systems /scratch/ta, /scratch/da, /scratch/serial-old and /scratch/serial-pio will be retired at the same time as the old clusters. That is, on October 4th, 2010 they will no longer be available from any of the CHPC clusters. Please retrieve any data you may wish to keep from these file systems before October 4th, 2010.


CHPC Presentation: HIPAA Environment, AI and NLP Services at CHPC, Thurs. 9/23, 2 p.m., HSEB 2908

Posted: September 20, 2010

Event date: September 23, 2010

CHPC Presentation

HIPAA Environment, AI and NLP Services at CHPC

Date: Thursday, September 23, 2010
Time: 2:00 p.m. until 3:00 p.m.
Location: HSEB 2908

by Sean Igo

This presentation is an overview of the equipment and software presently available at CHPC for Natural Language Processing (NLP). It will also cover related resources for general Artificial Intelligence use, such as machine learning and data mining, and will include a brief description of CHPC's general resources and how to access them.


CHPC Presentation: Introduction to I/O in the HPC Environment, Thur 9/16, 1 p.m., INSCC Auditorium (RM110)

Posted: September 13, 2010

Event date: September 16, 2010

Date: Thursday, September 16, 2010
Time: 1:00 p.m. until 2:00 p.m.
Location: INSCC Auditorium

by Brian Haymore and Sam Liston

This presentation will introduce users to the I/O performance characteristics of the various types of storage available to users of CHPC systems. Our goal for the presentation is to increase user understanding of various I/O patterns and how they relate to the shared computing environments at CHPC.

Topics will include a general overview of I/O, best practices, file operations, and application I/O, as well as troubleshooting techniques. We will present examples to illustrate the impact and performance characteristics of common usage cases. We will conclude with an open discussion driven by users' questions.


School's in session, viruses are increasing - please check and update machines

Posted: September 10, 2010

Over the past month as school has come back in session, system administrators and security administrators have noted increased virus/trojan activity at the University of Utah, other universities and even at collaborative scientific sites. We would like to respectfully request that all research users and collaborators take a little time to follow our recommended steps to deal with trojans / viruses.


CHPC Presentation: Introduction to Parallel Computing, 9/9/2010, 1:00 p.m., INSCC Auditorium (RM110)

Posted: September 9, 2010

Event date: September 9, 2010

Introduction to Parallel Computing

Date: September 9th, 2010
Time: 1:00 p.m.
Location: INSCC Auditorium (Room 110)
by Martin Cuma

In this talk, we first discuss various parallel architectures and note which ones are represented at the CHPC, in particular shared and distributed memory parallel computers. A very short introduction to the two programming solutions for these machines, MPI and OpenMP, will then be given, followed by instructions on how to compile, run, debug and profile parallel applications on the CHPC parallel computers. Although this talk is directed more towards those starting to explore parallel programming, more experienced users can gain from the second half of the talk, which will provide details on the software development tools available at the CHPC.
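
As a taste of what the MPI portion of the talk builds on, here is a minimal MPI program in C. It is an illustrative sketch rather than course material, and the compile and launch commands (e.g. "mpicc hello.c -o hello" and "mpirun -np 4 ./hello") are assumptions that depend on the MPI installation on each cluster.

    /* Illustrative MPI example: each process reports its rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }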


CHPC Fall 2010 Presentations to Begin September 2nd

Posted: August 26, 2010

Event date: September 2, 2010

CHPC Fall 2010 Presentation Schedule

**All Welcome**

All presentations are on Thursdays at 1:00 p.m. in the INSCC Auditorium unless otherwise specified.

* 9/2 Overview of CHPC (by Wim Cardoen)
* 9/9 Introduction to Parallel Computing (by Martin Cuma)
* 9/16 Introduction to IO in the HPC Environment (by Brian Haymore and Sam Liston)
* 9/23 HIPAA Environment, AI and NLP Services at CHPC (by Sean Igo) Time: 2:00 p.m., Location: HSEB 2908 (Health Science Education Building)
* 9/30 Introduction to programming with MPI (by Martin Cuma)
* 10/7 Introduction to Programming with OpenMP (by Martin Cuma)
* 10/14 **FALL BREAK** No presentation
* 10/21 Chemistry Packages at CHPC (by Anita Orendt)
* 10/28 Mathematical Libraries at CHPC (by Martin Cuma)
* 11/4 Statistical Resources at CHPC (by Byron Davis)
* 11/11 **NEW** Using Python for Scientific Computing (by Wim Cardoen)
* 11/18 Using Gaussian09 and Gaussview (by Anita Orendt)
* 11/25 **HOLIDAY BREAK** No presentation
* 12/2 **NEW** High-Performance Networks and Long-Distance Data Transfers (by Tom Ammon)
* 12/9 Debugging with Totalview (by Martin Cuma)
* 12/16 Hybrid MPI-OpenMP Programming (by Martin Cuma)

For full details: http://www.chpc.utah.edu/docs/presentations.


CHPC Allocation Requests Due Sept 10, 2010

Posted: August 18, 2010

Requests for a CHPC computer allocation for the quarter starting Oct 1, 2010 are due September 10, 2010. The electronic allocation request form can be accessed either via the link on http://www.chpc.utah.edu/docs/forms/allocation.html or directly from jira.chpc.utah.edu.

Please note that CHPC is anticipating retiring Delicatearch, Tunnelarch, and Landscapearch sometime early in the next quarter. If your group has been heavily using these freecycle clusters, you may want to take this retirement into account when evaluating your allocation needs for the upcoming quarter.

As always, direct any questions or concerns to issues@chpc.utah.edu.


Retirement of old cluster and scratch systems POSTPONED

Posted: July 13, 2010

The previously announced retirement of tunnelarch, delicatearch, and landscapearch and the scratch systems /scratch/ta, /scratch/da, /scratch/serial-old, and /scratch/serial-pio originally scheduled for Monday July 19th has been postponed. Once a new date is decided upon, another announcement will be made.


Coming Soon: Paperless Account Creation

Posted: July 13, 2010

In order to streamline the process of creating new CHPC accounts and to save paper, we will be releasing our Paperless Account Creation application soon. With this change, prospective users will be able to request an account through our web site and their PI will be able to approve the request electronically.

Moving away from printed forms will help us use less paper and will potentially allow users to get their accounts sooner.

If you would like to help us test the process and you have any new users that need an account, please send us an email at issues@chpc.utah.edu and we’ll get you set up.


Clusters Back Up after Outage due to Cooling Failure

Posted: July 2, 2010

Last night at about 2am the cooling failed in the Komas Datacenter; this required CHPC to take down the clusters. The cooling was restored overnight and the clusters were brought back online this morning. As of about 10:30am all clusters are up and running jobs.


Retirement of old scratch file servers: /scratch/ta; /scratch/da; /scratch/serial-old; and /scratch/serial-pio on July 19th, 2010 - Retrieve any data you may wish to keep from these file systems before then

Posted: July 2, 2010

In conjunction with the older cluster retirements, the following /scratch file systems will be retired: /scratch/ta, /scratch/da, /scratch/serial-old, and /scratch/serial-pio. That is, on July 19th, 2010 they will be taken offline and become unavailable to users. Please retrieve any data you may wish to keep from these file systems before Monday July 19th, 2010.


System Changes at CHPC - New cluster coming, old clusters to be retired 7/19/2010

Posted: July 1, 2010

The good news:

CHPC is starting the process of procuring a new cluster which will replace our oldest clusters. The new system will be a shared resource between the ICSE (Phil Smith) research group and the general campus community. We are moving quickly and hope to have the new system available to users in just a few months. The general campus portion of the new system is expected to provide roughly 4 times the compute capacity of the retiring systems and will be interconnected with an InfiniBand network.

The "bad" news:

To make room for the new cluster (space, heating and cooling constraints), we will need to retire our oldest clusters, namely delicatearch, tunnelarch and landscapearch. We plan to shut these clusters down the morning of July 19th, 2010 to prepare the machine room space for the new system in a timely manner. These old systems currently total 2.2 Tflops, whereas the general campus portion of the new system is estimated to be roughly 9.3 Tflops!

Also note that the old /scratch file systems /scratch/ta, /scratch/da, /scratch/serial-old and /scratch/serial-pio will be retired at the same time as the old clusters. That is, on July 19th, 2010 they will no longer be available from any of the CHPC clusters. Please retrieve any data you may wish to keep from these file systems before July 19th, 2010.


Updraft cluster is up

Posted: June 25, 2010

The Updraft cluster is up and running jobs. There is a caveat, though: the update of the Qlogic driver set has relocated the MPI distribution. We have globally set the default MPI to Qlogic with the Intel compilers, which now lives in /usr/mpi/qlogic/bin; previously it was in /usr/bin. So, if you have been using the full path instead of just e.g. mpirun, start using either just mpirun or the new full path /usr/mpi/qlogic/bin/mpirun. We have tried a few executables and they should work without having to rebuild (the Qlogic MPI uses dynamic linking), but if you experience problems, please rebuild the executable. If problems persist, please e-mail us at issues@chpc.utah.edu.


CHPC clusters back up after power outage

Posted: June 25, 2010

Systems Affected/Downtime Timelines: CHPC clusters except for Updraft are back after the power outage last night. If you notice any problems, please, e-mail issues@chpc.utah.edu.


UPDATE - Power outage at KOMAS

Posted: June 24, 2010

Rocky Mountain Power is at this time estimating that power will not be restored until later tonight. Therefore CHPC will wait until the morning to bring the clusters back on line, provided that the power is restored and stable at that time. Another update will be posted in the morning with further details.


Power Outage at Komas - 3:50 pm 6/24/2010

Posted: June 24, 2010

Duration: Unknown

Systems Affected/Downtime Timelines: All clusters

The Komas Data Center just lost power. All clusters are down. More information will be posted as soon as it becomes available.


Incremental Backups down due to hardware failure - 6/24/2010 - restored by 1:47 P.M on 6/25/2010

Posted: June 24, 2010

Systems Affected/Downtime Timelines: Incremental backups

For those of you who have home directory file servers supported by CHPC, please note that a critical hardware component of our backup robot has failed. IBM is shipping the new part and we expect to install it and be running incremental backups again by sometime tomorrow. We do have our full backup from this past weekend, but be aware that any newer files inadvertently removed may not be recoverable. The backup file systems impacted by this are:

  • apexarch.arches (homerfs, poet, warner) dirs
  • webapps.bmi dirs
  • bmi-facelli dirs
  • operations (for rhn, vms, www.chpc, and various CHPC monitoring systems)
  • astro-home dirs
  • bio-cheatham-home dirs
  • geo-smithRB-home dirs
  • geo-thorne-home dirs
  • chem-molinero-home dirs
  • chpc-facelli-home dirs
  • met-home dirs + various met(steenburgh, horel) dirs
  • tomofs-home dirs
  • chpc-vis

The backup system was returned to normal operation by 1:47 p.m. on Friday June 25th, 2010.


Updraft still down: ETA mid-day tomorrow (6/24/2010)

Posted: June 23, 2010

The updraft cluster is still down while we work with the InfiniBand drivers required for important file system testing. We expect to have the cluster operational sometime tomorrow (6/24/2010). We also expect another downtime in early July 2010 to upgrade updraft to the RedHat version 5 operating system.


Reminder: CHPC Major Downtime: Tuesday, June 22, 2010 ALL DAY

Posted: June 9, 2010

Event date: June 22, 2010

Duration: 7:30 a.m. until late afternoon or evening

Systems Affected/Downtime Timelines:

  • All HPC Clusters
  • Intermittent outage of all of the CHPC supported networks
  • Most CHPC supported desktops mounting CHPCFS file systems.
  • More explicit details will be sent as the downtime approaches.

Instructions to User:

  • Expect intermittent outages of the CHPC supported networks.
  • Some of the Desktops mounting the CHPCFS file systems will be affected. Plan for CHPCFS to be unavailable the first half of the day and possibly until 3 p.m. Those with Windows and Mac desktops should be able to function, but may not have access to the CHPCFS file systems. CHPC recommends that you reboot your desktops after the downtime.
  • All HPC Clusters will be down most of the day.

During this downtime maintenance will be performed on the cooling system in the Komas datacenter, requiring all clusters housed in the data center to be down most of the day. CHPC will take advantage of this down time to do a number of additional tasks, including work on the network and file servers.

ALL CHPCFS filesystems will be down the first half of the day. Some file systems served from CHPCFS will be unavailable until mid-afternoon. This includes HPC home directory space as well as departmental file systems supported by CHPC. We will work to get things online as soon as possible.

More explicit details will follow as we more finely define the scope of the outage.


Systems restored after power issues in Komas Data Center

Posted: May 23, 2010

As of about 3:30pm all clusters are back up and running jobs after the power issues that started last night.


Unexpected Downtime: Power Issues at Komas DataCenter

Posted: May 23, 2010

Duration: Started late May 22, 2010 - still ongoing

Systems Affected/Downtime Timelines: All clusters are affected

Instructions to User:

As mentioned in messages sent out over the last 12 or so hours, there were power issues (brownouts) at the Komas Data Center last night, which led to the systems staff taking down the clusters. Systems and networking personnel are on site and working on bringing the systems back online. We will send out an additional message when the clusters are available for use.


Change to Campus AD on Windows and Mac Samba mounts of CHPC filesystems

Posted: May 19, 2010

The change to campus authentication for samba mounts of CHPC file systems on Windows and Mac desktops has been completed. If you have mapped drives, you should disconnect and then reconnect to the mapped drive, using ad\unid as your login along with your campus password.

For CHPC supported Windows systems: for now continue to login to the INSCC domain as you have been doing until somebody from CHPC comes by to make the necessary changes.


Allocations requests for Summer quarter due June 10, 2010

Posted: May 19, 2010

Allocation requests for the next quarter (July - September 2010) are due June 10, 2010.

Please refer to CHPC's allocation policy for details, and to the application form. There are also links to both of these locations on the main CHPC web page. Note that allocation requests can be submitted electronically via either the link on the application form page or the jira issue tracking system, by choosing "allocation request" as the issue type when creating a new issue.


DOWNTIME of CHPC maintained Windows and MAC Desktops OR Windows/MAC Samba mounts of CHPC File Systems

Posted: May 17, 2010

Event date: May 19, 2010

Duration: approx 1 hour

Systems Affected/Downtime Timelines: On Wednesday May 19, 2010, starting at 8AM

CHPC is nearing the goal of a ‘Single Userid and Password’ on our systems. This week’s change ONLY affects users with CHPC maintained Windows or MAC desktops or users with samba mounts to CHPC file systems on their Windows or MAC desktops.

Instructions to User:

For Windows desktops: The authentication for logging into your desktop will change from the INSCC domain to the campus domain. The campus domain is AD. Therefore your login will be: ad\unid and your password will be your C.I.S. (campus) password. In addition, the authentication for any samba mounted drives will also change to the campus domain (ad\unid) and your C.I.S. password. On Windows desktops once the machine is joined to the new domain, you will need to log in to create a new profile. Then your files will need to be moved to your new profile. A CHPC admin will assist you with this. You will need to re-establish your mounts using your campus authentication (ad\unid) and C.I.S. password once we have made the change. The path to all samba mounted drives will not change, just the authentication.

Once we are done, any password changes will be done through the C.I.S. webpage and all password issues will be handled by the campus helpdesk.

For MAC desktops: At this time ONLY the authentication for your samba mount will be changing; the login to your desktop WILL stay the same. The authentication for any samba mounted drives will change to the campus domain (unid) and your C.I.S. password. You will need to re-establish your mounts using your unid and C.I.S. password once we have made the change. The path to all samba mounted drives will not change, just the authentication.

We ask that you be logged off your desktop before 8AM; we expect that you will be able to log back on to your desktops starting at 9AM.

As always, please let us know of any questions or concerns you have about these changes.


Jira Problem Tracking System Available as of 10:30 AM, May 3, 2010

Posted: May 3, 2010

The Jira maintenance has been completed as of 10:30 AM. We are now running Jira 4.1. Please send any error messages or other issues to us at issues@chpc.utah.edu.


Upgrade to Jira Problem Tracking System Planned Monday, May 3, 2010 from 9:00 AM to 10:00 AM

Posted: April 28, 2010

Event date: May 3, 2010

We will be upgrading our Jira problem tracking system this coming Monday, May 3. We will take down the old version at 9:00 AM and should have the new version up by 10:00 AM. We'll be sure to keep you posted if this maintenance window changes.

During the outage, you will be unable to access the system at https://jira.chpc.utah.edu/, but any emails sent to issues@chpc.utah.edu during this time should be processed once the new version is up and running.

The upgrade comes with some drastic changes to the user interface, particularly in the dashboard. We will try to minimize any interruption for people using the default system dashboard, but users with custom dashboards may have to remove or replace some of the pieces of their custom dashboards. You can find the documentation for customizing your dashboard at http://confluence.atlassian.com/display/JIRA/Customising+the+Dashboard.

As always, please contact us via email at issues@chpc.utah.edu or phone at 801-585-3791 if you have any problems, questions, or feedback.


*NEW* CHPC Presentation: Introduction to I/O in the HPC Environment, Thursday, April 29th, 1:00 p.m., INSCC Auditorium

Posted: April 20, 2010

Event date: April 29, 2010

Introduction to I/O in the HPC Environment

Date: April 29th, 2010
Time: 1:00 p.m.
Location: INSCC Auditorium
Presented by: Brian Haymore and Sam Liston

This presentation will introduce users to the I/O performance characteristics of the various types of storage available to users of CHPC systems. Our goal for the presentation is to increase user understanding of various I/O patterns and how they relate to the shared computing environments at CHPC.

Topics will include a general overview of I/O, best practices, file operations, and application I/O, as well as troubleshooting techniques. We will present examples to illustrate the impact and performance characteristics of common usage cases. We will conclude with an open discussion driven by users' questions.


Two CHPC Presentations this week, Thurs. 4/15, 1 p.m., Telematic Collaboration with the Access Grid (INSCC Auditorium), and HIPAA Environment, AI and NLP Services at CHPC (HSEB 2908)

Posted: April 12, 2010

Event date: April 15, 2010

CHPC will offer two presentations this week. Note the separate locations.

Telematic Collaboration with the Access Grid

Date: April 15th, 2010
Time: 1:00 p.m.
Location: INSCC Auditorium

by Jimmy Miklavcic and Beth Miklavcic

The development of the InterPlay performance series began in 1999 and was built upon the Access Grid infrastructure. The first public performance followed in 2003, and the series continues to date. It has created many unique challenges, and those challenges have matured and multiplied with each subsequent performance. This developmental process, the issues surmounted, and those currently being addressed are discussed in this presentation. Beth and Jimmy Miklavcic will provide an overview of the InterPlay performances from 2003 to 2010 and a preview of the Cinematic Display Control Interface being developed by undergraduate researchers.


HIPAA Environment, AI and NLP Services at CHPC

Date: April 15th, 2010
Time: 1:00 p.m.
Location: HSEB 2908

by Sean Igo

This presentation is an overview of the equipment and software presently available at CHPC for Natural Language Processing (NLP). It will also cover related resources for general Artificial Intelligence use, such as machine learning and data mining, and will include a brief description of CHPC's general resources and how to access them.


CHPC Presentation: Using Gaussian03 and Gaussview, Thursday, April 8th, 1:00 p.m., INSCC Auditorium

Posted: April 5, 2010

Using Gaussian03 and Gaussview

Presentation Date: April 8th, 2010
Presentation Time: 1:00 p.m.
Location: INSCC Auditorium
Presented by Anita Orendt

This presentation will focus on the use of Gaussian03 and Gaussview on the CHPC clusters. Batch scripts and input file formats will be discussed. Parallel scaling and timings with the different scratch options (TMP, MM, SERIAL, PARALLEL) will also be presented, along with a discussion of the scratch needs of Gaussian03. Finally, demonstrations of using GaussView to build molecules and input structures, set up input files, and analyze output files will be presented.


CHPC Downtime complete

Posted: March 23, 2010

Clusters are up and scheduling jobs. Access to all file systems has been restored. Please let us know if you see any issues by sending email to issues@chpc.utah.edu.

A few reminders:

  • If you have problems with desktop access, please try rebooting and see if that resolves the problem. If not, please contact us.
  • Please do not run significant jobs on interactive nodes. Any job requiring more than 15 minutes of walltime should be run through the batch system.
  • Please do not write large I/O to home directories. The /scratch file systems are there for this purpose.

Thanks for your patience.


CHPC Presentation: Introduction to Parallel Computing, Thursday 3/18/10 at 1:00 p.m., INSCC Auditorium

Posted: March 16, 2010

Event date: March 18, 2010

CHPC Presentation Series

Presented by Martin Cuma
Date: Thursday March 18, 2010
Time: 1:00 p.m.
Location: INSCC Auditorium

In this talk, we first discuss various parallel architectures and note which ones are represented at the CHPC, in particular shared and distributed memory parallel computers. A very short introduction to the two programming solutions for these machines, MPI and OpenMP, will then be given, followed by instructions on how to compile, run, debug and profile parallel applications on the CHPC parallel computers. Although this talk is directed more towards those starting to explore parallel programming, more experienced users can gain from the second half of the talk, which will provide details on the software development tools available at the CHPC.


CHPC Presentation: Overview of CHPC, Thursday 3/11/10 at 1:00 p.m., INSCC Auditorium

Posted: March 9, 2010

Event date: March 11, 2010

CHPC Presentation Series

Presented by Wim Cardoen
Date: 3/11/10
Time: 1:00 p.m.
Location: INSCC Auditorium

This presentation gives users who are new to CHPC, or who are interested in High Performance Computing, an overview of the resources available at CHPC and the policies and procedures for accessing these resources.

Topics covered will include:

  • The platforms available
  • Filesystems
  • Access
  • An overview of the batch system and policies
  • Service Unit Allocations


All clusters back up and running jobs

Posted: March 5, 2010

All of the clusters are back up and running jobs. Please let us know if you notice any problems.


Unscheduled Power outage at Komas Datacenter

Posted: March 5, 2010

Duration: started at about 3:45am March 5, 2010

Systems Affected/Downtime Timelines: All clusters and systems housed in the Komas Datacenter

At about 3:45am there was a power outage in Research Park, which affected the Komas Datacenter. Power was restored and stabilized by about 9:30am. CHPC staff have been on site working on the process of bringing everything back online. Currently they are working on recovering the scratch filesystem; this is the first step before the clusters themselves can be brought back online. It typically takes a couple of hours to bring up the clusters, run verification scripts, etc., and since there have been hardware failures due to the power outage, it may take even longer today. The bottom line is that we ask for your patience; it will most likely be fairly late this afternoon before the clusters are back online. A new message will be posted when the clusters are available. As always, any questions and/or concerns can be sent to issues@chpc.utah.edu


CHPC Spring Presentation Schedule begins March 11th, 2010: 1 p.m. in INSCC Auditorium (rm 110)

Posted: March 3, 2010

Event date: March 11, 2010

CHPC Spring 2010 Presentation Schedule

All Welcome!

All presentations are held on Thursday at 1:00 p.m. in the INSCC Auditorium, unless otherwise specified. Note: schedule may be subject to change.

  • March 11: Overview of CHPC - by Wim Cardoen
  • March 18: Intro to Parallel Computing - by Martin Cuma
  • March 25: **SPRING BREAK**
  • April 1: Intro to MPI - by Martin Cuma
  • April 8: Gaussian - by Anita Orendt
  • April 15: HIPAA Environment, AI and NLP - by Sean Igo (may be scheduled on upper campus)
  • April 15: (in auditorium) Interplay - by Jimmy and Beth Miklavcic
  • April 22: TBD

Batch System Downtime on Sanddunearch, Tunnelarch and Landscapearch

Posted: January 28, 2010

Duration: Starts 9AM Monday February 1, 2010

The batch systems on Sanddunearch, Tunnelarch, and Landscapearch have reservations in place to drain the running jobs by 9AM Monday February 1, 2010. No jobs in the queue will start unless they will finish before then.

During this downtime, the same changes that were made to the kernel on the Delicatearch compute nodes on Tuesday evening will be made on the other clusters that were migrated to RH5. This change is necessary to resolve the issue we have been experiencing with these nodes ending up in a bad state and needing to be rebooted while running jobs.

Note that there are some currently running jobs that may still be running on Monday morning. In these cases we will allow the jobs to finish before these nodes will be rebooted with the changes to the OS.


Delicate Arch Batch Downtime Finished

Posted: January 26, 2010

Work on Delicatearch compute nodes has been completed

As all but eight of the Delicatearch compute nodes were empty at about 10pm, the systems staff made the changes and rebooted those nodes. The remaining eight nodes have reservations set and will be dealt with in the morning. The reservation on the rest of the cluster was released as of 11pm, and scheduling of those nodes has resumed.


Delicatearch Batch system downtime 8AM Thursday January 28, 2010

Posted: January 25, 2010

A reservation is in place to drain the batch queue on Delicatearch only by 8AM on Thursday January 28, 2010 to allow for modifications and testing to try to resolve issues that have arisen since the move to RH5. Any job submitted to the batch system that will not finish by this time will not start until this reservation is released.


Sanddunearch open

Posted: January 19, 2010

Sanddunearch is back online and scheduling jobs. Although much has changed inside, the transition should be relatively smooth from the user's perspective. We have updated all the MPIs that use InfiniBand and are built with the GNU compilers (MVAPICH, MVAPICH2, OpenMPI). Note that we are now using gfortran rather than g77 for Fortran, so change your Makefiles to gfortran wherever g77 appears. Since all three of these MPIs use dynamic libraries, recompiling old executables should not be necessary, but if an executable does not run, recompile it first before contacting us.
As for the MPIs built with the PGI and Intel compilers, we will build those tonight, so they should be ready late tonight. Again, no changes should be needed in your Makefiles. We have tested some non-InfiniBand apps and they run fine as well. Make sure to use the RHEL5 MPICH2 if running over Ethernet, but we discourage running over Ethernet since you get better performance with InfiniBand.
As always, please let us know of any problems at issues@chpc.utah.edu.


sanddunearch DOWNTIME - Tuesday January 19th from 9 a.m. until late afternoon or evening

Posted: January 13, 2010

Duration: Approximately 8 hours

Systems Affected/Downtime Timelines: The only system affected will be the sanddunearch cluster.

Instructions to User:

CHPC is continuing the process of migrating all of our systems from RedHat4 to RedHat5. This downtime is to migrate sanddunearch to RedHat5. As part of this conversion, please note that any files in /tmp (on sanddunearch) will be scrubbed as we must reformat the filesystem to a newer version. Any jobs remaining in the sanddunearch queue at the beginning of the downtime will be lost and will need to be resubmitted after the downtime.


Delicatearch and Landscapearch Available for Use

Posted: January 11, 2010

Both Delicatearch and Landscapearch are available for use. Both systems are running RH5, so the comments made in the earlier email about Tunnelarch apply.


Tunnelarch Now on RedHat 5 and Available to Users

Posted: January 11, 2010

The operating system on tunnelarch has been upgraded to RedHat5 and this cluster is once again available to users. There are, however, a few changes that users should consider:

  • The cluster is no longer allocated (this change was made Jan 1, 2010)
  • The new walltime limit is 10 days; the long qos will no longer be available
  • Some packages may need to be rebuilt. CHPC has tested packages in the /uufs/chpc.utah.edu location.
  • For packages using mpich2, there are new builds available for RH5 - make sure your scripts are pointing to these:
    • /uufs/chpc.utah.edu/sys/pkg/mpich2/1.2r5/bin - for gnu compiler
    • /uufs/chpc.utah.edu/sys/pkg/mpich2/1.2ir5/bin - for intel compiler
    • /uufs/chpc.utah.edu/sys/pkg/mpich2/1.2pr5/bin - for pgi compiler
  • For users of NWChem - see CHPC web page for a new build for RH5

The migration of delicatearch and landscapearch to RH5 is underway and these clusters will be made available as soon as possible. The myrinet interconnect on delicatearch WILL NOT be available when these clusters are returned to service.

As always, if there are any questions or concerns about these changes, please contact us.