R update on Linux machines
Posted: February 3, 2015
We have built a new distribution of R that takes advantage of multithreading when processing numerical arrays, and we have made it the default R on CHPC's Linux systems, including the clusters. The multithreaded build runs in parallel on a single compute node; a standard R linear algebra benchmark achieves a 5-10x speedup on kingspeak compared to the single-threaded build. To use multiple nodes, a distributed parallel package such as Rmpi or snow is still necessary.
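For single-node runs, the thread count can usually be capped with an environment variable in the batch script. The sketch below is hypothetical: the effective variable depends on which BLAS library the R build links against (OpenMP-based builds commonly honor OMP_NUM_THREADS; MKL also honors MKL_NUM_THREADS), and the script name is a placeholder.

```shell
#!/bin/bash
# Hypothetical sketch: run the multithreaded R build on one compute node,
# capping the number of BLAS threads at the cores requested from the scheduler.
# OMP_NUM_THREADS is an assumption -- the variable that actually takes effect
# depends on the BLAS library this R build links against.
export OMP_NUM_THREADS=${PBS_NUM_PPN:-8}   # 8 is a placeholder core count
Rscript my_analysis.R                      # my_analysis.R is a placeholder script
```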
We have installed the external packages that were built for the older versions, but if there is an additional package you need, please let us know.
For details on how to use R at CHPC, see the documentation at
IDL updated to version 8.4
Posted: January 23, 2015
We have updated the default IDL distribution to version 8.4. Please let us know if you notice any problems with it.
Also, some users have an older version hardcoded in their environment. If you'd like to use the default (latest) version, either edit your shell init scripts or get in touch with us for help.
Jan 22: FastX server upgrade has been completed
Posted: January 22, 2015
The FastX server upgrade has been completed. Please remember to upgrade your FastX clients to the latest version (version 39), and if you have any problems, please send in an issue report.
Supercomputing in Plain English - Video Conference - University of Oklahoma
Posted: January 16, 2015
Event date: January 20, 2015
Duration: 1 hour, but plan for it to go over by a bit
FREE
Video Conference feed also hosted at CHPC
Please send email to email@example.com if you plan to attend so we can have a headcount. However, if you have already notified us at firstname.lastname@example.org, no need to send another request.
INSCC Conference Room 345
Supercomputing in Plain English (SiPE), Spring 2015
Available live in person and live via videoconferencing
Please feel free to share this with anyone who may be interested and appropriate.
IF YOU'VE ALREADY REGISTERED, please forward or ignore -- there's no need to re-register.
Tuesdays starting Jan 20 2015, 1:30pm Central Time (3:30pm Atlantic, 2:30pm Eastern, 12:30pm Mountain, 11:30am Pacific, 9:30am Hawai'i-Aleutian)
Sessions are 1 hour, but please budget 1 1/4 hours, in case we run long or there are lots of questions.
Live in person: Stephenson Research & Technology Center boardroom, University of Oklahoma Norman campus
Live via videoconferencing: details to be announced
(You only need to register ONCE for the whole semester, not for every week.)
We already have over 250 registrations!
So far, the SiPE workshops have reached over 1500 people at 248 institutions, agencies, companies and organizations in 47 US states and territories and 10 other countries:
* 178 academic institutions;
* 29 government agencies;
* 26 private companies;
* 15 not-for-profit organizations.
SiPE is targeted not only at computer scientists but especially at scientists and engineers, including a mixture of undergraduates, graduate students, faculty and staff.
These workshops focus on fundamental issues of High Performance Computing (HPC) as they relate to Computational and Data-enabled Science & Engineering (CDS&E), including:
- overview of HPC;
- the storage hierarchy;
- instruction-level parallelism;
- high performance compilers;
- shared memory parallelism (e.g., OpenMP);
- distributed parallelism (e.g., MPI);
- HPC application types and parallel paradigms;
- multicore optimization;
- high throughput computing;
- accelerator computing (e.g., GPUs);
- scientific and I/O libraries;
- scientific visualization.
The key philosophy of the SiPE workshops is that an HPC-based code should be maintainable, extensible and, most especially, portable across platforms, and should be sufficiently flexible that it can adapt to, and adopt, emerging HPC paradigms.
Prerequisite: one recent semester of programming experience and/or coursework in any of Fortran, C, C++ or Java.
January 22 at 4pm -- FastX server upgrade
Posted: January 9, 2015
Duration: about an hour
On Thursday January 22 at 4pm we will be upgrading the server side of FastX on the cluster interactive nodes (including the meteoXX and atmosXX nodes) and the frisco nodes. This process will not take long (about an hour), but there is a chance that existing FastX sessions will not be accessible after the upgrade, so we strongly suggest that all FastX sessions be shut down before this time. If this is not possible, at a minimum users should save the results of any processes running within a FastX session.
After the server side upgrade all users will also have to upgrade their FastX clients.
The reason we are doing this upgrade now instead of waiting for the next downtime is that, according to the company that produces FastX, we can expect up to a 4-6x speed gain for remote visualization operations when using the new server version (version 38) with the latest client version (version 39). To see the other fixes and improvements included in the release, visit the StarNet site.
As always, if you have any questions or concerns with these plans, please contact us.
Spring 2015 CHPC Presentation Schedule
Posted: January 6, 2015
Spring 2015 CHPC Presentation Schedule
All presentations are 1-2pm INSCC auditorium unless noted otherwise
* These classes are 1-3pm in INSCC Auditorium
** BMI Classroom (421 Wakara Way Room 1470) at 2-3pm
*** XSEDE Workshop schedules will be posted when available; these typically run from about 9am to 3pm with an hour break along the way; users will need to register at the XSEDE site.
Date Presentation Title Presenter
Friday January 9th XSEDE HPC Monthly Workshop: OpenMP*** XSEDE
Tuesday January 13th Overview of CHPC Anita Orendt
Tuesday January 20th Protected Environment at CHPC** Sean Igo
Tuesday January 27th Introduction to Linux, part 1* Anita Orendt and Albert Lund
Tuesday February 3rd Introduction to Linux, part 2* Albert Lund and Anita Orendt
Friday February 6th XSEDE HPC Monthly Workshop: OpenACC*** XSEDE
Tuesday February 10th Introduction to Linux, part 3* Albert Lund and Anita Orendt
Tuesday February 10th NLP and AI Services at CHPC** Sean Igo
Tuesday February 17th Introduction to Parallel Computing Martin Čuma
Tuesday February 24th Hands-on Introduction to Python, Part 1* Wim Cardoen and Walter Scott
Tuesday March 3rd Hands-on Introduction to Python, Part 2* Wim Cardoen and Walter Scott
Weds-Th March 4-5 XSEDE HPC Monthly Workshop: MPI*** XSEDE
Tuesday March 10th Hands-on Introduction to Numpy/Scipy* Wim Cardoen and Walter Scott
SPRING BREAK MARCH 15-22
Tuesday March 24th Intel Software Development Tools Martin Čuma
CHPC offering OpenMP XSEDE HPC Workshop on Jan 9, 2015
Posted: December 17, 2014
Event date: Jan 9, 2015
XSEDE HPC Workshop: OpenMP
January 9, 2015
XSEDE, along with the Pittsburgh Supercomputing Center and the National Center for Supercomputing Applications at the University of Illinois, is pleased to announce a one-day OpenMP workshop. This workshop is intended to give C and Fortran programmers a hands-on introduction to OpenMP programming. Attendees will leave with a working knowledge of how to write scalable codes using OpenMP.
CHPC will be one of the satellite sites that will offer this class. There is no cost to attend, but you must register with XSEDE at their site before 3pm on Jan 8, 2015.
The agenda for the Workshop can be found at the registration link.
Dec 11, 2014: CHPC Fall 2014 Newsletter now online
Posted: December 11, 2014
The Fall 2014 CHPC newsletter is now available on the CHPC website at this link.
Dec 10, 2014: Merge of Telluride nodes into Lonepeak is Complete
Posted: December 10, 2014
The merge of the telluride nodes into lonepeak is now complete. Lonepeak now has a total of 100 compute nodes – 16 general and 84 owned by the Cheatham group. All nodes are running RedHat Enterprise Linux 6.
More information about the expanded lonepeak cluster and the nodes added can be found on our wiki at:
Some notes on usage:
- Lonepeak now has general and owner resources in a similar fashion to our other clusters
- There have been no changes on the general nodes. These nodes are still unallocated, so all users can run on these by using their normal account as was done before the merge, and there is no preemption on these nodes.
- The owner nodes of lonepeak can be accessed by all users outside of the owner group by using the owner-guest account (#PBS -A owner-guest). Jobs run under this account are preemptable by jobs submitted by the owner group.
- While the general nodes have a 10Gbit/s Ethernet interconnect for multi-node jobs, the owner nodes have a QLogic InfiniBand interconnect (SDR). Intel MPI and specific builds of MPICH2 can be used to produce a single executable that runs on multiple network targets. For details, see the documentation on this topic, as well as the "Important Differences between General and Owner Nodes" section of the Lonepeak User's Guide.
- For Gaussian users: if you run as owner-guest, you need to use the legacy Intel build of gaussian09, which is accessed by pointing g09root to /uufs/chpc.utah.edu/sys/pkg/gaussian09/EM64TL. This change has been added to the sample Gaussian batch script linked on the Gaussian user guide page.
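Taken together, the notes above can be sketched as a hypothetical owner-guest batch script. The resource request, program name, and Intel MPI fabric names below are assumptions for illustration, not CHPC-provided values; only the owner-guest account string and the g09root path come from the notes themselves.

```shell
#!/bin/bash
# Hypothetical owner-guest job sketch for lonepeak; adjust to your own job.
#PBS -A owner-guest                     # guest access to owner nodes (preemptable)
#PBS -l nodes=2:ppn=8,walltime=4:00:00  # placeholder resource request

# The owner nodes use QLogic InfiniBand while the general nodes use 10GigE.
# With Intel MPI the network can typically be selected at run time; the fabric
# names below are assumptions that depend on the installed Intel MPI version.
export I_MPI_FABRICS=shm:dapl           # InfiniBand on the owner nodes
# export I_MPI_FABRICS=shm:tcp          # Ethernet on the general nodes

# Gaussian users running as owner-guest: select the legacy Intel build by
# pointing g09root at the path given in the note above.
export g09root=/uufs/chpc.utah.edu/sys/pkg/gaussian09/EM64TL

cd $PBS_O_WORKDIR
mpirun -np 16 ./my_program              # placeholder executable and rank count
```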
If you run into any applications that do not run on these owner nodes, or if you have questions or problems, please open up an issue report.
After the first of the new year, we will be adding Slurm for batch scheduling on this cluster. We are using lonepeak as a test bed to explore replacing Moab and Torque, our current batch scheduler and resource manager, with the open-source Slurm product. More information will follow as we move forward on this project.