May 20, 2014 - Lonepeak Cluster Now Available for Use

Posted: May 20, 2014

CHPC now has a cluster of larger-memory nodes, named lonepeak, available for use by any CHPC user. This cluster has a total of 16 nodes: half have 12 cores at 2.67GHz and 96GB memory, and the remainder have 20 cores at 2.00GHz and 256GB memory. There is a 33TB NFS-mounted scratch space, /scratch/lonepeak/serial, on this cluster. Please note that this is not new hardware, but gear that was previously used for other purposes; it is still under warranty and we expect at least a couple of years of usage. Some of you have used this hardware over the last couple of years as the turretarch or UCS nodes, where you ssh'ed into the assigned node and ran from there instead of through a batch system.

The nodes are connected via Ethernet (i.e., no Infiniband as on ember and kingspeak). Users should look for builds of applications in /uufs/; if there is an application you need built for this cluster, please send a request to issues.

We will not be giving allocations for this cluster, and there will be no preemption. Instead, all jobs will run with the same base priority, with the addition of a fairshare policy at the group level, which provides a small boost to groups without recent usage. Reservations will be allowed for users with a demonstrated need for the larger memory; these reservations can cover at most half of the nodes of either type for up to two weeks.
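Since lonepeak jobs go through the batch system rather than a direct ssh login, a minimal submission script may help as orientation. This is a sketch only: it assumes a Torque/PBS-style batch environment, and the job name, walltime, and application path shown are illustrative, not confirmed defaults for lonepeak.

```shell
#!/bin/bash
#PBS -N largemem_job        # job name (illustrative)
#PBS -l nodes=1:ppn=12      # one 12-core node; use ppn=20 to target the 20-core nodes
#PBS -l walltime=24:00:00   # requested wall time (illustrative)

# Stage work in the NFS-mounted scratch space mentioned above
SCRDIR=/scratch/lonepeak/serial/$USER/$PBS_JOBID
mkdir -p "$SCRDIR"
cd "$SCRDIR"

# Run the application (binary name is hypothetical; look for builds under /uufs/)
./my_app > my_app.out
```

Because lonepeak has Ethernet only, single-node jobs are the natural fit; communication-heavy multi-node MPI work is better suited to the Infiniband-connected ember and kingspeak clusters.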

For more details see both the user guide and the scheduling policy page.

If you have any questions about this resource, please email issues.