2009 CHPC News Announcements

Older clusters off allocation control beginning January 1st, 2010 (delicatearch and tunnelarch)

Posted: December 14, 2009

Beginning with the next calendar quarter's allocations (January 1, 2010), the older clusters (delicatearch and tunnelarch) will be off allocation control. This means that jobs on those two clusters will run with a quality of service of "freecycle"; as a result, jobs will be scheduled in a FIFO-like (first in, first out) manner with some backfill (see the note below for more information on backfill).

For research groups with existing "arches" allocations, your "arches" awards will be scaled in proportion to the lost cycles (to 42% of the prior level). For example, if you had a general arches allocation of 10,000 SUs, your new allocation would be 4,200 SUs and would give you priority only on the sanddunearch cluster. The NIH block grants will be discontinued. Groups requesting new allocations on arches should expect deep cuts to compensate for the two clusters going off allocation control.

Note: Backfill just means that the scheduler is smart enough to fill in idle nodes with small jobs that can fit, without delaying the start of the job that is holding a reservation, so scheduling won't be strictly FIFO. For example, if a job ahead of yours needs 100 nodes, it holds a reservation until 100 nodes are free, which usually leaves a set of nodes sitting idle in the meantime. The scheduler looks at when the running jobs will finish and can predict that your job (asking for only 8 nodes) will fit on those reserved nodes without delaying the 100-node job, so it will go ahead and run your job.
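To make the backfill check concrete, here is a minimal sketch in C (this is only an illustration, not CHPC's actual scheduler code; the job structure and the specific numbers are assumptions): a waiting job can be backfilled if it fits on the currently idle nodes and will finish before the reserved start time of the large job ahead of it.

#include <stdio.h>

/* Hypothetical job description, used only for this illustration. */
struct job {
    const char *name;
    int nodes_requested;
    double walltime_hours;
};

/* Return 1 if 'candidate' can be backfilled without delaying the job
 * holding the reservation, 0 otherwise. */
int can_backfill(const struct job *candidate,
                 int idle_nodes,
                 double hours_until_reserved_start)
{
    return candidate->nodes_requested <= idle_nodes &&
           candidate->walltime_hours <= hours_until_reserved_start;
}

int main(void)
{
    /* The 100-node job ahead in the queue holds a reservation that starts
     * in 6 hours; 40 nodes are idle right now. */
    int idle_nodes = 40;
    double hours_until_reserved_start = 6.0;

    /* Your 8-node job asks for 4 hours of walltime, so it fits. */
    struct job small = { "8-node job", 8, 4.0 };

    if (can_backfill(&small, idle_nodes, hours_until_reserved_start))
        printf("%s can be backfilled now.\n", small.name);
    else
        printf("%s must wait its turn in FIFO order.\n", small.name);

    return 0;
}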


CHPC Presentation: Hybrid MPI-OpenMP Programming, 12/3/09, 1 p.m., INSCC Auditorium

Posted: December 1, 2009

Hybrid MPI-OpenMP Programming
December 3rd, 2009
Location: INSCC Auditorium
Presented by: Martin Cuma

In this talk we will introduce the hybrid MPI-OpenMP programming model, designed for distributed shared memory parallel (DSMP) computers. The CHPC clusters are representative of this family, having two or more shared memory processors per node. OpenMP generally provides a better-performing alternative for parallelization inside a node, while MPI is used for communication between the distributed processors. We will discuss cases where the hybrid programming model is beneficial and provide examples of simple MPI-OpenMP codes.
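As a taste of what such codes look like, here is a minimal hybrid MPI-OpenMP sketch (this is not an example from the talk; the suggested compile command, e.g. "mpicc -fopenmp hybrid.c", is an assumption that depends on the local MPI installation): each MPI process handles communication between nodes while a team of OpenMP threads parallelizes the work inside the node.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, provided;

    /* Request funneled threading: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* OpenMP parallelizes work within the node. */
    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, nprocs, omp_get_thread_num(), omp_get_num_threads());
    }

    /* MPI handles communication between the distributed processes. */
    int local = rank, sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of all ranks: %d\n", sum);

    MPI_Finalize();
    return 0;
}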


CHPC Presentation: Gaussian and GaussView

Posted: November 17, 2009

Event date: November 19, 2009

CHPC Presentation Series

Gaussian and GaussView
Date: November 19, 2009
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Anita Orendt

This talk will focus on the use of the Gaussian and GaussView software packages at CHPC. The presentation will provide details on the new features of Gaussian09 and the differences between Gaussian03 and Gaussian09, along with information about the scaling of the software on CHPC computational resources.


CHPC Presentation: NLP Services at CHPC, 11/5, 1 p.m., INSCC Auditorium

Posted: November 2, 2009

CHPC Presentation Series

NLP Services at CHPC
Date: November 5, 2009
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Sean Igo

This presentation is an overview of the equipment and software presently available at CHPC for Natural Language Processing (NLP). It will also cover related resources for general Artificial Intelligence use, such as machine learning and data mining, and will include a brief description of CHPC's general resources and how to access them.


CHPC Presentation: Telematic Collaboration with the Access Grid, 10/29, 1 p.m., INSCC Auditorium

Posted: October 26, 2009

Date: October 29th, 2009 **Corrected**
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Beth Miklavcic, Jimmy Miklavcic, Josh Bross and Colin McDermott

The development of the InterPlay performance series began in 1999, built upon the Access Grid infrastructure, and was followed by the first public performance in 2003; the series continues to date. It has created many unique challenges, which have matured and multiplied with each subsequent performance. This developmental process, the issues surmounted, and those currently being addressed are discussed in this presentation. Beth and Jimmy Miklavcic will provide an overview of the InterPlay performances from 2003 to 2009 and a preview of the Cinematic Display Control Interface being developed by undergraduate researchers Josh Bross and Colin McDermott.


CHPC Downtime completed successfully about 7:15 p.m. (10/13/09) - clusters up and scheduling jobs

Posted: October 13, 2009

CHPC successfully completed the downtime today. The /scratch/mm space which had been deployed on the marchingmen cluster prior to retiring that system, has been re-deployed on tunnelarch as /scratch/ta and is available on the interactive and compute nodes of the tunnelarch cluster.

If you see any problems with your desktop, please reboot and see if the issue is resolved. If the problem persists, please let us know by sending email to issues@chpc.utah.edu


CHPC Major Downtime: Tuesday, October 13th, 2009 - ALL DAY

Posted: October 7, 2009

Duration: From 8:00 a.m. until evening, Tuesday October 13th, 2009

Systems Affected/Downtime Timelines:

  • All HPC Clusters
  • Intermittent outage of CHPC supported networks
  • All CHPC supported desktops mounting CHPCFS file systems.

Instructions to User:

  • Expect intermittent outages of the CHPC supported networks.
  • If your desktop mounts the CHPCFS file systems, plan for CHPCFS to be unavailable most of the day. Those with Windows and Mac desktops should be able to function, but will not have access to the CHPCFS file systems. CHPC recommends that you reboot your desktops after the downtime.
  • All HPC Clusters will be down most of the day.

During this downtime maintenance will be performed on the cooling system in the Komas datacenter, requiring all clusters housed in the data center to be down most of the day. CHPC will take advantage of this down time to do a number of additional tasks, including work on the network and file servers.

All file systems served from CHPCFS will be unavailable for a good part of the day. This includes HPC home directory space as well as departmental file systems supported by CHPC. We will work to get things online as soon as possible.


CHPC Presentation: Introduction to Programming with OpenMP, 10/08, 1 p.m., INSCC Auditorium

Posted: October 6, 2009

Date: Thursday, October 8th, 2009
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Martin Cuma

This talk introduces OpenMP, an increasingly popular and relatively simple shared memory parallel programming model. Two parallelizing schemes, parallel do loops and parallel sections, will be detailed using examples. Various clauses that allow the user to modify the parallel execution will also be presented, including sharing and privatizing of variables, scheduling, synchronization, and mutual exclusion of parallel tasks. Finally, a few hints will be given on removing loop dependencies in order to obtain effective parallelization.
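For readers who want a preview, here is a minimal OpenMP sketch (not the talk's own example; the "gcc -fopenmp" compile command is an assumption) showing a parallel loop with sharing and scheduling clauses and a reduction, which the runtime uses to avoid a race on the shared sum.

#include <omp.h>
#include <stdio.h>

#define N 1000

int main(void)
{
    double a[N], sum = 0.0;
    int i;

    /* Parallel do/for loop: the loop index is private automatically, the
     * array 'a' is shared, and dynamic scheduling hands out chunks of
     * 100 iterations to the threads. */
    #pragma omp parallel for shared(a) schedule(dynamic, 100)
    for (i = 0; i < N; i++)
        a[i] = (double)i;

    /* The reduction clause gives each thread a private partial sum and
     * combines them at the end, avoiding mutual exclusion by hand. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.1f (expected %.1f)\n", sum, (N - 1) * N / 2.0);
    return 0;
}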


CHPC Presentation: Introduction to Programming with MPI, 10/01, 1 p.m., INSCC Auditorium

Posted: September 29, 2009

Date: Thursday, October 1st, 2009
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Martin Cuma

This course discusses introductory and selected intermediate topics in MPI programming. We base the presentation on two simple examples and explain their parallel development with MPI. The first example covers MPI initialization and simple point-to-point communication (which takes place between two processes). The second example includes an introduction to collective communication calls (where all active processes are involved) and options for effective data communication strategies, such as derived data types and packing the data. Some ideas on more advanced MPI programming options are discussed at the end of the talk.
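For a preview of what these examples involve, here is a minimal MPI sketch (not the course's own example; the "mpicc" and "mpirun -np 4" commands are assumptions that depend on the local MPI installation) combining simple point-to-point communication between two processes with a collective call involving all processes. It must be run with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs, token = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Point-to-point communication: rank 0 sends a token to rank 1. */
    if (rank == 0) {
        token = 42;
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received token %d from rank 0\n", token);
    }

    /* Collective communication: every process contributes its rank to a
     * global sum that all processes receive. */
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks across %d processes: %d\n", nprocs, sum);

    MPI_Finalize();
    return 0;
}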


***REMINDER*** Updraft users: /scratch/general and /scratch/uintah will be scrubbed of all data at Noon on MONDAY 9/28/2009

Posted: September 15, 2009

***REMINDER***

In order to optimize performance on the /scratch/general and /scratch/uintah file systems (on the updraft cluster), CHPC will use the general DAT of updraft from Noon on Monday, September 28th until Noon on Wednesday, September 30th. ALL DATA FROM THESE FILESYSTEMS WILL BE SCRUBBED! Please make sure that any important results you may have in those spaces are copied elsewhere by Noon, Monday, September 28th, 2009.

***REMINDER***


CHPC Presentation: Statistical Resources at CHPC, 9/17, 1 p.m., INSCC Auditorium

Posted: September 14, 2009

Date: September 17th, 2009
Time: 1:00 - 2:00 p.m.
Location: INSCC Auditorium
Presented by: Byron Davis

This presentation gives users (and potential users) of CHPC's statistical resources an overview of the equipment and software presently available. Additionally, a list of the specialized statistical software we've supported over the past 10 years or so will be presented.


Mathworks seminar - Speeding up Matlab applications

Posted: September 10, 2009

Event date: September 23, 2009

Wednesday, Sep. 23rd, 2:00-5:00 p.m.
110 INSCC (Auditorium)
Registration starts at 1:45 p.m.

Speeding up Matlab applications

Mathworks engineers will be on site to discuss the following topics:
- Understand memory usage and vectorization in Matlab
- Address bottlenecks in your programs
- Optimize file I/O to streamline your code
- Transition from serial to parallel Matlab programs

This should be a good seminar for Matlab users who are interested in making their code run faster and in parallelizing their code (CHPC has licenses for parallel Mathworks products), but every regular Matlab user should benefit from the topics discussed.

To register, please visit: www.mathworks.com/seminars/uu0909
Walk-ins are welcome too.

For more information contact Alyssa Winer at Alyssa.Winer@mathworks.com


CHPC Presentation: Overview of CHPC, Thursday 9/10/09 at 1:00 p.m., INSCC Auditorium

Posted: September 8, 2009

This presentation gives users who are new to CHPC, or who are interested in High Performance Computing, an overview of the resources available at CHPC and the policies and procedures for accessing these resources.

Topics covered will include:

  • The platforms available
  • Filesystems
  • Access
  • An overview of the batch system and policies
  • Service Unit Allocations


Komas Datacenter Power Outage - Clusters down from 3 p.m. until about 7:45 p.m.

Posted: September 2, 2009

Duration: Power dropped just before 3:00 p.m. on Wednesday 9/2; clusters down from 3:00 until about 7:45 p.m.

There was a power interruption in the Komas machine room at about 3 p.m. on Wednesday, September 2nd, 2009, causing all of the clusters and related equipment to go down. Power was restored by about 4:00 p.m., and CHPC staff went on site to bring things back online. All of the clusters were back online and running jobs by about 7:45 p.m. We apologize for the inconvenience.


DAT (Dedicated Access Time) on Updraft moved due to Labor Day holiday

Posted: September 1, 2009

The "dedicated access time" or DAT on the updraft cluster will be moved one day to accommodate the Labor Day Holiday (Monday, September 7th). The DAT, usually scheduled to begin on Monday, will be moved to begin at Noon on Tuesday, September 8th, and will run for 48 hours until Noon on Thursday, September 10th.


CHPC Presentation Series begins September 10th!

Posted: August 26, 2009

CHPC Presentation Series Begins September 10th, 2009

All Presentations begin at 1:00 p.m. on Thursdays and are located in the INSCC Auditorium.

Map to INSCC

For more details visit http://www.chpc.utah.edu/docs/presentations

Overview of CHPC
Date: September 10th, 2009
Presented by: Julia Harrison

Statistical Resources at CHPC
Date: September 17th, 2009
Presented by: Byron Davis

Introduction to Parallel Computing
Date: September 24th, 2009
Presented by: Martin Cuma

Introduction to programming with MPI
Date: October 1st, 2009
Presented by: Martin Cuma

Introduction to Programming with OpenMP
Date: October 8th, 2009
Presented by: Martin Cuma

Mathematical Libraries at CHPC
Next Scheduled: October 22nd, 2009
Presented by: Martin Cuma

Telematic Collaboration with the Access Grid
Date: October 29th, 2009
Presented by: Jimmy Miklavcic and Beth Miklavcic

NLP (Natural Language Processing) Services at CHPC
Date: November 5th, 2009
Presented by: Sean Igo

Chemistry Packages at CHPC
Date: November 12th, 2009
Presented by: Anita Orendt

Using Gaussian03 and Gaussview
Date: November 19th, 2009
Presented by: Anita Orendt

Hybrid MPI-OpenMP Programming
Date: December 3rd, 2009
Presented by: Martin Cuma

Debugging with Totalview
Date: December 10th, 2009
Presented by: Martin Cuma

All are Welcome!


CHPC clusters back up after 7/28 outage

Posted: July 29, 2009

The cluster outage lasted from about 9:17 p.m. on 7/28 until about 2 p.m. on 7/29. All the CHPC clusters are back after last night's power outage. The startup took longer than usual because some additional recovery and validation steps were required.
If anyone has any problems, please let us know at issues@chpc.utah.edu


Clusters down due to power outage

Posted: July 28, 2009

The INSCC building had a power glitch at about 10 p.m. which faulted the INSCC machine room air conditioner. Servers in the INSCC machine room may need to be taken offline if the room gets too hot.
This was followed by a power outage at the Komas machine room at about 10:20 p.m. which downed our clusters. Unless something changes, the clusters will remain down until tomorrow morning.


Retiring marchingmen cluster - July 2009

Posted: July 15, 2009

CHPC will be retiring the rest of the marchingmen cluster (mm129-mm179) over the next several days to accommodate new equipment in the Komas data center. Nodes mm001-mm128 were retired during the last downtime, June 23rd, 2009.


Komas Machine Room Power Outage: HPC Clusters down from about Noon 7/08/09 until about Noon 7/09/09 (except updraft online 11:30 p.m. 7/09/09)

Posted: July 8, 2009

Duration: Began at 11:45 a.m. on July 8th, 2009 - Most clusters back online by Noon July 9th, 2009

Systems Affected/Downtime Timelines: All clusters, file servers and network equipment in the Komas Machine room.

All power dropped initially to Komas at approx. 11:45 a.m. on July 8th, 2009. Most of the clusters were back online by Noon on July 9th, 2009, except for the following:

  • updraft - lost /scratch/serial and /scratch/general; back online at 11:30 p.m. on 7/09/2009
  • delicatearch lost a couple of switches; nodes da193-da214 will remain down until replacements arrive

Intel software updates

Posted: June 25, 2009

We have upgraded the Intel C/C++ and Fortran compilers to version 11.1. No change on the user end should be necessary, as we have made the new version's path the standard. With this version, we also installed 32-bit versions of these compilers for those users whose desktops are still 32-bit. To access the 32-bit versions, put the following in your .tcshrc:
source /uufs/chpc.utah.edu/sys/pkg/intel/icc/std/bin/ia32/iccvars_ia32.csh
source /uufs/chpc.utah.edu/sys/pkg/intel/ifort/std/bin/ia32/ifortvars_ia32.csh

We have also installed a new version of the Intel Cluster Toolkit (ICT), which we use for MPI profiling and MPI error checking via the Intel Trace Analyzer and Collector (ITAC). The ICT license also includes the Intel Math Kernel Library (MKL) and Intel MPI (necessary for the MPI error checking). Intel MPI is not installed yet, as there was a problem with its license; we'll install it once we hear back from Intel.
To source the whole Toolkit, do:
source /uufs/chpc.utah.edu/sys/pkg/intel/ict/3.2.1.015/ictvars.csh

Separate components of the Toolkit can be sourced as:
MKL:
source /uufs/chpc.utah.edu/sys/pkg/intel/mkl/10.2.0.013/tools/environment/mklvarsem64t.csh
ITAC:
source /uufs/chpc.utah.edu/sys/pkg/intel/itac/std/bin/itacvars.csh
MPI:
source /uufs/chpc.utah.edu/sys/pkg/intel/impi/std/bin/impivars.csh

Intel has excellent documentation on their ICT website at:
http://software.intel.com/en-us/intel-cluster-toolkit/

As always, if you encounter any problems, e-mail issues@chpc.utah.edu


Post Downtime News

Posted: June 24, 2009

During the downtime on June 23rd, 2009, most CHPC clusters were updated to RedHat 4.8 - for more information visit: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/4.8/html/Release_Notes/index.html

Also, users should note that nodes mm001-mm128 have been retired from the marchingmen cluster.


CHPC Major Downtime: Tuesday, June 23rd, 2009 - ALL DAY

Posted: June 9, 2009

Duration: From 7:00 a.m. until evening

Systems Affected/Downtime Timelines:

  • All HPC Clusters - will be taken down starting at 7 a.m. until sometime in the evening.
  • Intermittent outages of CHPC supported networks (INSCC, 7th & 8th floor of WBB and CHPC supported networks in Sutton) from 8:30 until Noon.
  • All CHPC supported desktops mounting CHPCFS file systems. From about 7 a.m. until late afternoon.
  • INSCC Machine room - no cooling from 8:30 until the repair of the faulty electrical system is complete.

Instructions to User:

  • Expect intermittent networking outages in INSCC, WBB 7th and 8th floors, and CHPC supported networking in the Sutton building.
  • If your desktop mounts the CHPCFS file systems, plan for CHPCFS to be unavailable most of the day. Those with Windows and Mac desktops should be able to function, but will not have access to the CHPCFS file systems. CHPC recommends that you reboot your desktops after the downtime.
  • All HPC Clusters will be down most of the day. One maintenance task will be updating the queuing system on the clusters. We expect queued jobs to ride this out, but there is a possibility we may lose queued jobs.

During this downtime maintenance will be performed on the electrical system in the INSCC datacenter, requiring most equipment to be shut down. Physical maintenance will be performed in Komas. CHPC will also be performing maintenance on the clusters and the network in the Komas machine room.

All file systems served from CHPCFS will be unavailable for a good part of the day. This includes HPC home directory space as well as departmental file systems supported by CHPC. We will work to get things online as soon as possible, hopefully by late afternoon.


Cluster problems this morning (4/23/09) - ALL CLEAR - clusters are still up, network functioning and the cooler back online

Posted: April 23, 2009

More details about this morning's connectivity problems to the CHPC clusters:

One of the large coolers failed at Komas. CHPC staff were able to make some adjustments to air flow and call in the cooler repair folks. Due to the temperatures, the main router in the middle of arches had shut itself down in the early morning, but CHPC staff rebooted it and it is back online and functioning. The cooler came back online by 11:30 a.m. and temperatures started to drop at that point. By 1 p.m. all operations were back to normal.


CHPC PRESENTATION: Hybrid MPI-OpenMP Programming, April 23rd, 2009, 1:00PM in the INSCC Auditorium

Posted: April 20, 2009

Date: April 23rd, 2009
Time: 1:00 p.m.
Location: INSCC Auditorium
Presenter: Martin Cuma

In this talk we will introduce the hybrid MPI-OpenMP programming model, designed for distributed shared memory parallel (DSMP) computers. The CHPC clusters are representative of this family, having two or more shared memory processors per node. OpenMP generally provides a better-performing alternative for parallelization inside a node, while MPI is used for communication between the distributed processors. We will discuss cases where the hybrid programming model is beneficial and provide examples of simple MPI-OpenMP codes.


CHPC PRESENTATION: Gaussian03 & GaussView, Thursday April 9, 2009 at 1:00PM in the INSCC Auditorium

Posted: April 7, 2009

This presentation will focus on the use of Gaussian03 and GaussView on the CHPC clusters. Batch scripts and input file formats will be discussed. Parallel scaling and timings with the different scratch options (TMP, MM, DA, SERIAL, GENERAL) will also be presented, along with a discussion of the scratch needs of Gaussian03. Finally, several demonstrations of using GaussView to build molecules and input structures, set up input files, and analyze output files will be presented.


CHPC PRESENTATION: NLP Services at CHPC, Thursday April 2nd, 2009 at 1:00PM in the INSCC Auditorium

Posted: March 30, 2009

Presented by: Sean Igo

This presentation is an overview of the equipment and software presently available at CHPC for Natural Language Processing (NLP). It will also cover related resources for general Artificial Intelligence use, such as machine learning and data mining, and will include a brief description of CHPC's general resources and how to access them.


Power Outage - Komas machine room down approx. 5:41 p.m. March 22nd, 2009 - clusters back up by 11:45 p.m.

Posted: March 22, 2009

There was an unexpected power failure at the 585 Komas datacenter on Sunday, March 22nd, 2009. As usual, jobs that were running when power was lost were lost; there are no other new outstanding issues.

Systems Affected: All HPC Clusters and associated scratch file systems.

Downtime Timeline: down from about 6:45 to 11:45 p.m.


CHPC Major Downtime Work Complete (Telluride to be back up tomorrow after testing)

Posted: March 17, 2009

Our scheduled downtime is complete and all systems are up and running, with the following exceptions:

  • UPDATE: Telluride (not Turretarch as previously stated) batch nodes will be up some time Wednesday after testing.
  • Marchingmen nodes mm054-mm064 are down due to power breaker issues.
  • Delicatearch nodes da001-da064 are down, awaiting replacement Cisco network switches.
  • la106 is unable to boot; our systems staff is investigating.

We successfully updated the Torque scheduler software on Delicatearch, Marchingmen, Turretarch, and Landscapearch from 2.1 to 2.3. Because there were some major changes since the last update, it's possible that some jobs that were queued up will be lost. Please check the status of your jobs and resubmit if any have been dropped.

--
Walter A. Scott
Web Application Developer
Center for High Performance Computing
University of Utah


CHPC PRESENTATION: Introduction to Parallel Computing, Thursday March 12th, 2009 at 1:00PM in the INSCC Auditorium

Posted: March 10, 2009

Presented by: Martin Cuma

In this talk, we first discuss various parallel architectures and note which ones are represented at the CHPC, in particular shared and distributed memory parallel computers. A very short introduction to two programming solutions for these machines, MPI and OpenMP, will then be given, followed by instructions on how to compile, run, debug and profile parallel applications on the CHPC parallel computers. Although this talk is directed more towards those starting to explore parallel programming, more experienced users can gain from the second half of the talk, which will provide details on the software development tools available at the CHPC.


CHPC Major Downtime: Tuesday, March 17th, 2009 - ALL DAY

Posted: March 10, 2009

Duration: From 8:00 a.m. until evening

Systems Affected/Downtime Timelines:

  • All HPC Clusters
  • Intermittent outage of CHPC supported networks (INSCC, 7th & 8th floor of WBB and CHPC supported networks in Sutton)
  • All CHPC supported desktops mounting CHPCFS file systems.

Instructions to User:

  • Expect intermittent networking outages in INSCC, WBB 7th and 8th floors, and CHPC supported networking in the Sutton building.
  • If your desktop mounts the CHPCFS file systems, plan for CHPCFS to be unavailable most of the day. Those with Windows and Mac desktops should be able to function, but will not have access to the CHPCFS file systems. CHPC recommends that you reboot your desktops after the downtime.
  • All HPC Clusters will be down most of the day. One maintenance task will be updating the queuing system on the clusters. We expect queued jobs to ride this out, but there is a possibility we may lose queued jobs.

During this downtime maintenance will be performed on the cooling system in the Komas datacenter, requiring all clusters housed in the data center to be down most of the day. CHPC will take advantage of this down time to do a number of additional tasks, including work on the network and file servers.

All file systems served from CHPCFS will be unavailable for a good part of the day. This includes HPC home directory space as well as departmental file systems supported by CHPC. We will work to get things online as soon as possible.


Matlab upgraded

Posted: March 9, 2009

We have upgraded Matlab to version R2009a on Arches and on the CHPC maintained Linux desktops. Similarly to the upgrades announced earlier today, we put the latest Matlab in the /uufs/chpc.utah.edu/sys branch. Those who want to use the latest version should look into their shell init scripts (.tcshrc, .bashrc) and make sure the Matlab environment is sourced from the chpc.utah.edu branch, not from the arches branch.

Among the highlights of the new version are the use of multiple CPU cores for some compute-intensive functions and the use of up to 8 threads in the Parallel Computing Toolbox.

As usual, if there are any questions or problems, contact us at issues@chpc.utah.edu


Applications updates

Posted: March 9, 2009

We have updated the Totalview debugger and the PGI compilers to their latest versions.

For the PGI compilers, apart from added support for the latest CPUs, the two highlights are an improved profiler, pgprof, and added provisional support for NVidia CUDA based GPUs. See the documentation for more details: https://www.pgroup.com/resources/docs.htm

Totalview has mainly bug fixes and minor enhancements.

Also, we are on track with moving all common applications into a single space, named /uufs/chpc.utah.edu/sys/. The aforementioned upgrades were only performed in this /uufs/chpc.utah.edu/sys/ space. In order to use these new versions, please modify your shell init scripts (.tcshrc, .bashrc) and replace all existing references to /uufs/arches/sys/ with /uufs/chpc.utah.edu/sys/. This will ensure a smooth transition once we start moving away from the /uufs/arches/sys/ space.

If you have any questions, please contact issues@chpc.utah.edu


CHPC PRESENTATION: Statistical Resources at CHPC, Thursday March 5, 2009 at 1:00PM in the INSCC Auditorium

Posted: March 2, 2009

Presented by: Byron Davis

This presentation will give users and potential users of CHPC's statistical resources an overview of the equipment and software presently available.


CHPC Presentation: Overview of CHPC, Thursday February 26th, 2009 at 1:00 p.m. in the INSCC Auditorium

Posted: February 23, 2009

Presented by: Julia D. Harrison

This presentation gives users who are new to CHPC, or who are interested in High Performance Computing, an overview of the resources available at CHPC and the policies and procedures for accessing these resources.

Topics covered will include:

  • The platforms available
  • Filesystems
  • Access
  • An overview of the batch system and policies
  • Service Unit Allocations


Cleanup /scratch filesystems

Posted: February 19, 2009

A reminder to keep the /scratch file systems cleaned up. This space is made available to support the HPC jobs running on the CHPC clusters and is not meant as a place to store data long term. Please take a moment and clean up as much of /scratch/serial, /scratch/da, and /scratch/mm on the arches clusters and /scratch/general and /scratch/uintah on the updraft cluster as possible. Keeping them cleaned up means they will be available when you need them for your jobs.

Also, a reminder of our scratch cleanup policies: "Files on /scratch can be stored up to 30 days. All files not accessed within 30 days will be removed. Files in local, global or shared scratch space can be deleted at any time. Files on scratch file systems are not backed up."


CHPC Spring 2009 Presentation Schedule

Posted: February 17, 2009

Every Spring CHPC presents an abbreviated series of our most popular talks. Here is our lineup for this Spring. As always, please let us know if you have suggestions for other CHPC presentations!

All presentations are on Thursdays at 1:00 p.m. in the INSCC Auditorium.

  • 2/26 Overview of CHPC (Julia Harrison)
  • 3/05 Statistical Resources at CHPC (Byron Davis)
  • 3/12 Introduction to Parallel Computing (Martin Cuma)
  • 3/19 **Spring Break** (No Presentation)
  • 3/26 Introduction to programming with MPI (Martin Cuma)
  • 4/02 Using Gaussian03 and Gaussview (Anita Orendt)
  • 4/09 NLP Services at CHPC (Sean Igo) NEW!
  • 4/16 Telematic Performance - InterPlay: Nel Tempo Di Sogno (Jimmy Miklavcic & Beth Miklavcic)
  • 4/23 Hybrid MPI/OpenMP Programming (Martin Cuma)

For more information please go to CHPC Presentations.


New Service at CHPC: Periodic Archive

Posted: February 11, 2009

While CHPC does NOT perform regular backups of the default HPC home directory space, we have recognized the needs of some groups to protect their data. CHPC has the ability to make periodic 'archive' backups to tape of data for research groups. These archives can be no more frequent than once per quarter. Each research group is responsible for the cost of the tapes. Future 'archive' backups can be made to the original tapes (more tapes may be needed if the data set has grown), or a new tape purchase can be made with CHPC's assistance. To schedule this service, please:

  • send email to issues@chpc.utah.edu
  • purchase tapes (CHPC will assist you with tape requirements).
  • CHPC will perform the archive backup to tape.
  • tapes can be stored at CHPC or can be stored by the PI.

CHPC suggests that each group have two sets of tapes, so that any time a full backup is being written to one set there is still a copy on the other set to protect against a disaster occurring mid-archive.

To see our complete disk policy, please see: http://www.chpc.utah.edu/docs/policies/disk.html


Cambridge Structural Database (CSD) Updated

Posted: January 8, 2009

The new year of the Cambridge Structural Database system has been installed. Due to a change in the organization of software packages at CHPC, you will have to make one change to your .tcshrc or .bashrc in order to access the new CSD (the only change is the path prefix).

In your .tcshrc replace the line:

       source /uufs/arches/sys/pkg/CSD/std/cambridge/etc/csd.csh

with

       source /uufs/chpc.utah.edu/sys/pkg/CSD/std/cambridge/etc/csd.csh

If you are a bash user, in your .bashrc replace the line:

       source /uufs/arches/sys/pkg/CSD/std/cambridge/etc/csd.bash

with

       source /uufs/chpc.utah.edu/sys/pkg/CSD/std/cambridge/etc/csd.bash

As always, if you have any problems with the use of the CSD, please contact us.