CHPC resources are available to qualified faculty, students (under faculty supervision), and researchers from any Utah institution of higher education. Users can request accounts for CHPC computer systems by filling out an account request form. This can be found by following the link below or by coming into Room 405, INSCC Building. (Phone 581-5253)

All ICEBox hardware is out of warranty and is currently run on a best-effort basis. That is, CHPC continues running it as long as the hardware is functional and tries to repair failed nodes by salvaging parts. Because of this, Icebox is available to all CHPC users without any allocation requirements. All that is needed to run is a CHPC account.

The Linux Cluster Environment can be accessed via ssh (secure shell) at the following address:

icebox.chpc.utah.edu

SSH (secure shell) is required for remote login. ssh is part of any UNIX distribution; for Windows login, use the freeware PuTTY or a commercial program such as SecureCRT. For copying files to/from the outside, use scp on Linux or WinSCP on Windows.
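For example, to log in and to copy files from a Linux desktop (the user name myuser and the file names below are placeholders):

ssh myuser@icebox.chpc.utah.edu
scp input.dat myuser@icebox.chpc.utah.edu:
scp myuser@icebox.chpc.utah.edu:results.dat .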

All CHPC machines mount the same user home directories, so the files visible on one cluster are the same as on the others. While this has the obvious benefit of not having to copy files between machines, users must be aware of it and make sure that they run the correct executables for the particular cluster platform (e.g. not run an Arches x86-64 executable on the Icebox).

Another complication associated with a single home directory across all systems is the shell initialization scripts (which run at each login and set up the environment, paths, ...). The environment, and especially the paths to applications, vary between clusters.
CHPC has created a login script that determines which machine is being logged into and performs machine-specific initializations. A second goal of this script is to let users turn on/off initialization for specific packages installed on the cluster, e.g. switch between different MPI distributions, or initialize variables for use of Totalview, Gaussian,...


Default .tcshrc login script for CHPC systems
Default .bashrc login script for CHPC systems

Note that those using the tcsh shell need the .tcshrc file, while those using bash need .bashrc.

The first part of each script determines which machine is being logged into, based on the machine's operating system and IP address. The list of CHPC Linux machine addresses is retrieved from the CHPC web server at each login and stored in the file linuxips.csh or linuxips.sh. If the web server is down, there is a roughly one-minute timeout, after which the script either uses the IP address file saved from a previous session or, if none is available, issues a warning.
The script then finds the IP address of the machine and performs host-specific initialization.

Below is an example of the tcsh initialization on tunnelarch. It works the same way for all other Arches clusters and for Icebox; the bash syntax is similar but slightly different. Lines starting with # are comments. You can turn specific package initializations on or off by adding or removing a comment character at the start of the corresponding source line. Do not comment out lines that do not start with source.

else if (($MYHOST == $TARCH)||($MYIP == $TARCHL1)||($MYIP == $TARCHL2)) then
# Commenting/uncommenting source lines below will disable/enable specified packages
# stacksize by default is very small, which causes programs with large static data to segfault
limit stacksize unlimited

setenv CLUSTER tunnelarch.arches
# default path addon
setenv PATH "/uufs/$CLUSTER/sys/bin:$PATH"
setenv MANPATH "/uufs/$CLUSTER/sys/man:$MANPATH"

#source PGI defines
source /uufs/$CLUSTER/sys/pkg/pgi/std/etc/pgi.csh

#source Pathscale defines
source /uufs/$CLUSTER/sys/pkg/pscale/std/etc/pscale.csh

#Vampirtrace and Vampir source
source /uufs/$CLUSTER/sys/pkg/ita/std/etc/ita.csh
source /uufs/$CLUSTER/sys/pkg/itc/std/etc/itc.csh

# additional user customizations below here

After the host-specific initialization sections, the last section of the script does a global initialization that is the same for every machine. Here you can, for example, set command aliases, the prompt format, etc.

If you also mount the CHPC home directory on your own desktop (most people do), we recommend setting the variable MYMACHINE to that machine's IP address. This address can be found by issuing the command hostname -i. For example, given:


hostname -i
123.456.78.90

we would change the MYMACHINE line in .tcshrc to:

set MYMACHINE="123.456.78.90"

Then look for the $MYIP == $MYMACHINE line in the script, and add selected customizations there.
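For example, to add customizations that should apply only on your own desktop, place them under that branch of the script; the alias and printer name below are purely illustrative:

else if ($MYIP == $MYMACHINE) then
    # customizations for my own desktop only
    alias ll 'ls -l'
    setenv PRINTER lab_printer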

The batch implementation on all CHPC systems consists of a resource manager, PBS, and a scheduler. The scheduler on ICE Box is the Maui scheduler.

Any process that runs for more than 15 minutes must be run through the batch system.

There are three steps to running in batch:

  1. Create a batch script file.
  2. Submit the script to the batch system.
  3. Check on your job.

Example Bash Script for ICE Box

Note that a batch script is just shell programming: you simply write into the file the commands you would like to run, the same way you would type them interactively.

The following is an example script for running in PBS on ICE Box. The lines at the top of the file all begin with #PBS; the shell treats them as comments, but they pass options to PBS and Maui. Please see the options below for the available flags.

In this example the job is limited to 4 CPUs and 1 hour of walltime. See the ICE Box cluster configuration for information on available speeds and memory. The PBS "-l" option tells PBS and Maui what resources your job requires. You will need to ask for resources consistent with CHPC policies and with what is available. Please see the "Maui on ICE Box" section below.

You will need to change the #PBS -M line to your own email address, and in the cp commands change working_directory to your own directory path.

Example PBS Script for ICE Box:

#PBS -S /bin/bash
#PBS -l nodes=4,walltime=1:00:00
#PBS -m abe
#PBS -M username@your.address.here
#PBS -N jobname

# Create a job-specific scratch directory
mkdir -p /scratch/serial/$USER/$PBS_JOBID

# Copy data files and the executable to the scratch directory
cp $HOME/working_directory/data_files $HOME/working_directory/a.out /scratch/serial/$USER/$PBS_JOBID

# Run from the scratch directory
cd /scratch/serial/$USER/$PBS_JOBID

# Execute serial or mpi job
# This uses standard MPICH compiled with GNU/Pathscale located in
# /uufs/icebox/sys/pkg/mpich/std
# For executables compiled with PGI compilers, use /uufs/icebox/sys/pkg/mpich/std_pgi
mpirun -np 4 -machinefile $PBS_NODEFILE ./a.out > outputfile

# Copy files back home and clean up
cp * $HOME/working_directory && cd .. && rm -rf /scratch/serial/$USER/$PBS_JOBID

PBS uses your default shell, so there is usually no need to specify which shell to use.

Note that we are copying all the input files and the executable from working_directory to /scratch/serial. This is temporary disk space designed for faster I/O throughput and larger capacity. For these reasons, we recommend using /scratch/serial rather than running the executable out of the home directory.

Job Submission on ICE Box

Submit your job using the "qsub" command in PBS or the "runjob" command in Maui. See the PBS commands below for additional PBS commands.

For example, to submit a script file named "pbsjob", type

qsub pbsjob
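qsub prints a job identifier when the job is accepted (for example 12345.icebox.chpc.utah.edu; the identifier shown here is hypothetical). That identifier can be used with the other PBS commands described below, e.g.:

qstat 12345.icebox.chpc.utah.edu     # show the status of this job
qdel 12345.icebox.chpc.utah.edu      # delete the job if it is no longer needed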

PBS sets and expects a number of variables in a PBS script. For information on these variables and related requirements, enter:

man qsub

Checking On Your Job

To check if your job is queued or running, use the "showq" command in Maui.

showq

See the Maui commands below for additional Maui commands.

Maui on ICE Box

The Maui scheduler uses information from your script to schedule your job. For detailed information please see the Maui web page. This section covers information pertinent to running on ICE Box.

Although there is no allocation enforcement, there is a restriction on the maximum job wall time of 72 hours. Please see the CHPC Batch Policy for details. In some cases jobs need more than 72 hours to complete; in that case, contact CHPC, and if we deem it appropriate we will grant you unlimited wall time access.

The Maui scheduler has a complex set of rules for prioritizing jobs based on many different criteria. On Icebox we use almost none of these rules except fairshare; that is, users who use the cluster extensively will have slightly lower job priority than those who do not.

Node Usage Restrictions:

There are no node usage restrictions on ICEBox except for the 72 hour wall clock limit.

Updated February 2, 2005

ICE Box Batch Node Availability

Memory PBS      Speed PBS property
property        s2000       s1667       s1533       s1400       s1200       Total nodes
m3072           5 (dual)    -           29 (dual)   -           -           34 dual (68 p)
m2048           8 (dual)    9 (dual)    21 (dual)   20 (dual)   5 (dual)    63 dual (126 p)
m1024           -           -           10 (dual)   -           -           10 dual (34 p)
Total           13 dual     9 dual      70 dual     20 dual     5 dual      107 (214 procs)
                (26 p)      (18 p)      (140 p)     (40 p)      (10 p)

(p = processors; a dash marks a speed/memory combination with no nodes)

Please note that this is just an approximate count and will vary as the hardware further ages. Use the command pbsnodes -a to view the current detailed node availability.

ICE Box is configured to minimize complexity of use. Users log into an interactive "front end node", develop their code on this machine, and submit jobs through PBS. On ICE Box there are three locations for storage. Home directory space is common to all nodes of ICE Box. Scratch space also has a common name on all nodes, but the physical scratch disk is local to each machine (/tmp). Each storage area performs differently, which makes each suitable for different situations. For more on storage see below.

Using PBS we are able to manage and control the use of the system to allow for fair usage of all resources. Moreover, with PBS and MPI users do not need specific knowledge of the individual compute nodes.

PBS node specifications:

In your PBS script you can specify what type of node your job requires (see the example after this list). Valid PBS node specifications are:

  • By Speed: s1200, s1333, s1533, s1667, s2000
  • By Memory: m1024, m2048, m3072
  • Other Special Requirements: myrinet - Myrinet is not supported yet
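For example, a job requesting two dual-processor 2000 MHz nodes with 3 GB memory for 24 hours could combine these properties in its resource request; this is a sketch using the standard PBS node-property syntax:

#PBS -l nodes=2:ppn=2:s2000:m3072,walltime=24:00:00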

Available disk storage:

Icebox has three levels of disk storage: home directory, global scratch, and local scratch.

  • Home directory: Remotely mounted disk space (over NFS) that is shared by most CHPC users. As such, it is subject to high load, which limits its performance. Furthermore, CHPC is considering not mounting this space on compute nodes in the future. Consequently, we recommend this space only for file storage, not for use during job runs.
  • Global scratch: There are two levels of global scratch.
    /scratch/serial is a single NFS file server mounted throughout ICEBox. It is tuned for better I/O performance; however, since it is just a serial (single-server) file system, its performance is not stellar and it has only about 250 GB of disk capacity.
    /scratch/parallel is a parallel file system running PVFS. Because of its parallel layout, it has much better aggregate I/O performance. We are in the process of deploying it and will update this page once it is operational.
  • Local scratch: This is disk space located directly on each node as /tmp. It is the fastest to access, but the data are visible only on that particular node. Also, different nodes have different disk sizes, so one has to be careful not to exceed the node's limit. Most of the nodes have about 70 GB of /tmp capacity; some nodes (ib083-086, ib089, ib095-099, ib101-107) have about 35 GB. We employ a script that cleans /tmp after each job, which means this space should always be available. However, if your job goes over its wall clock time and writes to /tmp, the data are deleted and no recovery is possible. A sketch of a script fragment that uses local scratch follows this list.
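As a sketch, a PBS script fragment that stages data through the node-local /tmp might look like the following (the file names are illustrative):

# use node-local scratch; these files are visible only on this node
mkdir -p /tmp/$USER/$PBS_JOBID
cp $HOME/working_directory/a.out $HOME/working_directory/input.dat /tmp/$USER/$PBS_JOBID
cd /tmp/$USER/$PBS_JOBID
./a.out > output.dat
# copy results home and clean up before the job ends
cp output.dat $HOME/working_directory && cd /tmp && rm -rf /tmp/$USER/$PBS_JOBID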

Setting Up Your Environment for Batch

PBS will fail if you have tty-dependent commands in your .profile, .cshrc, .login or .logout. One way to preserve your login defaults and avoid problems with PBS is to check whether you are running under PBS before executing a tty-dependent command. This is easy to do by testing whether the environment variable PBS_ENVIRONMENT is set, and executing your commands accordingly.

Bourne and Korn shell users can accomplish this by modifying the .profile with the following:

if [ -z "$PBS_ENVIRONMENT" ]
then
   # do interactive commands
       stty erase '^H'
       export TERM=xterm
else
   # do batch specific commands
   :   # ':' is a no-op placeholder; the branch may not be empty
fi

csh users can use the following in their .cshrc and/or .login:

if ( $?PBS_ENVIRONMENT ) then
   # do batch specific commands
else
   # do interactive commands
       stty erase '^H'      
       set term=xterm
endif

If you run csh and you have a ~/.logout script, you should place the following at the beginning and end of the file.

#First of csh ~/.logout
set EXITVAL = $status

#Last line of csh ~/.logout
exit $EXITVAL

PBS Batch Script Options

  • -a date_time.  Declares the time after which the job is eligible for execution. The date_time element is in the form: [[[[CC]YY]MM]DD]hhmm[.S].
  • -e path.  Defines the path to be used for the standard error stream of the batch job. The path is of the form: [hostname:]path_name.
  • -h.  Specifies that a user hold will be applied to the job at submission time.
  • -I.  Declares that the job is to be run "interactively". The job will be queued and scheduled as PBS batch job, but when executed the standard input, output, and error streams of the job will be connected through qsub to the terminal session in which qsub is running.
  • -j join.  Declares whether the standard error stream of the job will be merged with the standard output stream. The join argument is one of the following:
    • oe:  Directs both streams to standard output.
    • eo:  Directs both streams to standard error.
    • n:  The two streams remain separate (default).
  • -l resource_list.  Defines the resources that are required by the job and establishes a limit on the amount of resources that can be consumed. Users will want to specify the walltime resource, and if they wish to run a parallel job, the ncpus resource.
  • -m mail_options.  Conditions under which the server will send a mail message about the job. The options are:
    • n: No mail ever sent
    • a (default): When the job aborts
    • b: When the job begins
    • e: When the job ends
  • -M user_list.  Declares the list of e-mail addresses to whom mail is sent. If unspecified it defaults to userid@host from where the job was submitted. You will most likely want to set this option.
  • -N name.  Declares a name for the job.
  • -o path.  Defines the path to be used for the standard output. [hostname:]path_name.
  • -q destination.  The destination is the queue.
  • -S path_list.  Declares the shell that interprets the job script. If not specified it will use the user's login shell.
  • -v variable_list.  Expands the list of environment variables which are exported to the job. The variable list is a comma-separated list of strings of the form variable or variable=value.
  • -V.  Declares that all environment variables in the qsub command's environment are to be exported to the batch job.
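These options can also be given on the qsub command line rather than in the script; for example (job name, address, and script name below are placeholders):

qsub -l nodes=2,walltime=12:00:00 -N testjob -m abe -M username@your.address.here pbsjob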

PBS User Commands

For any of the commands listed below you may do a "man command" for syntax and detailed information.

Frequently used PBS user commands:

  • qsub. Submits a job to the PBS queuing system. Please see qsub Options below.
  • qdel. Deletes a PBS job from the queue.
  • qstat. Shows status of PBS batch jobs.
  • xpbs. X interface for PBS users.

Less Frequently-Used PBS User Commands:

  • qalter. Modifies the attributes of a job.
  • qhold. Requests that the PBS server place a hold on a job.
  • qmove. Removes a job from the queue in which it resides and places the job in another queue.
  • qmsg. Sends a message to a PBS batch job. To send a message to a job is to write a message string into one or more of the job's output files.
  • qorder. Exchanges the order of two PBS batch jobs within a queue.
  • qrerun. Reruns a PBS batch job.
  • qrls. Releases a hold on a PBS batch job.
  • qselect. Lists the job identifier of those jobs which meet certain selection criteria.
  • qsig. Requests that a signal be sent to the session leader of a batch job.

Maui Scheduler User Commands

  • showq - displays jobs that are running, active, idle, and non-queued.
  • showbf - shows the resources currently available for backfill (immediate use).
  • showstart - shows the estimated start time of a job.

Currently, Maui Scheduler is used on ICE Box, and the Sierra cluster (rocky/sierra). These commands are located in "/uufs/icebox/sys/bin" for ICE Box, and "/uufs/sierra/sys/bin" for the Sierra cluster (rocky/sierra). Please see the Maui Scheduler documentation for more information.
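For example, assuming /uufs/icebox/sys/bin is in your path (the CHPC login scripts described above normally add the cluster sys/bin directory), and given a hypothetical job id 12345:

showq                 # list running and idle jobs
showstart 12345       # estimated start time for job 12345
showbf                # resources currently available for backfill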

C/C++

Updated: February 2, 2005

The Icebox cluster offers several compilers. The GNU Compiler Suite includes ANSI C, C++ and Fortran 77 compilers. The current version is 3.4.2, which ships with the Red Hat EL 3.0 that runs on the system.

In addition to the GNU compilers, we offer two commercial compiler suites. The Pathscale compilers generally provide superior performance. They include C, C++ and Fortran 77/90/95 compilers. An advantage is full interoperability with the GNU compilers, including g77. This compiler currently supports OpenMP only for Fortran; the company targets C OpenMP for the next release, expected by mid 2005. Pathscale was originally targeted at the Opteron platform but now supports IA-32 as well. However, we have found some problems with larger codes, so please exercise caution when using it. Also, the compiler's performance on IA-32 does not appear to be as good as that of PGI.

The Portland Group Compiler Suite is our compiler distribution of choice on Icebox, as it is stable and includes Athlon optimization features. It should also interoperate with GNU, though we have seen problems executing some Fortran codes that linked g77-compiled libraries. As a more mature product than Pathscale, the PGI compilers support OpenMP and ship with more tools.

GNU compilers

The GNU distribution is located in the default area, that is, compilers in /usr/bin, libraries in /usr/lib or /usr/lib64, header files in /usr/include, etc. The user should not need to do anything other than invoke the compiler by its name, e.g.:

gcc source.c -o executable

Pathscale compilers

The latest version of the Pathscale compilers is located at /uufs/icebox/sys/pkg/pscale/std.

To find the compiler version, use the --version flag, e.g. pathcc --version.

In order to use the compilers, users must source a shell script that defines paths and some other environment variables.

  • source /uufs/icebox/sys/pkg/pscale/std/etc/pscale.csh  (for csh/tcsh)
  • source /uufs/icebox/sys/pkg/pscale/std/etc/pscale.sh  (for ksh/bash)

The compilers are invoked as pathcc, pathCC and pathf90 for C, C++ and Fortran 90, respectively. For a list of available flags, use the man pages (e.g. man pathcc).

Aggressive optimization is achieved with -O3 -OPT:Ofast. Further performance gains can be achieved with interprocedural analysis, invoked with the -ipa flag; however, there are some limitations to its use. Contact CHPC staff if you run into problems.
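For example, a C source file (the name is a placeholder) could be compiled with aggressive optimization as:

pathcc -O3 -OPT:Ofast source.c -o executable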

For more information on the compiler, visit Pathscale website.

A user's guide that contains useful discussion on optimization flags can be found here.

PGI compilers

The latest version of the Portland Group compilers is located at /uufs/icebox/sys/pkg/pgi/std.

To find the compiler version, use the -V flag, e.g. pgcc -V.

In order to use the compilers, users must source a shell script that defines paths and some other environment variables.

  • source /uufs/icebox/sys/pkg/pgi/std/etc/pgi.csh  (for csh/tcsh)
  • source /uufs/icebox/sys/pkg/pgi/std/etc/pgi.sh  (for ksh/bash)

The compilers are invoked as pgcc, pgCC, pgf77 and pgf90 for C, C++, Fortran 77 and Fortran 90, respectively. For a list of available flags, use the man pages (e.g. man pgcc).

We generally recommend the -fastsse flag for good performance.
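For example (the source file name is a placeholder):

pgcc -fastsse source.c -o executable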

For more information on the compiler, visit Portland Group website.

Documentation including user's guide, language reference, etc. can be found here.

Fortran

Updated: February 2, 2004

The Icebox cluster offers several compilers. The GNU Compiler Suite includes ANSI C, C++ and Fortran 77 compilers. The current version is 3.4.2, which ships with the Red Hat EL 3.0 that runs on the system.

In addition to the GNU compilers, we offer two commercial compiler suites. The Pathscale compilers generally provide superior performance. They include C, C++ and Fortran 77/90/95 compilers. An advantage is full interoperability with the GNU compilers, including g77. This compiler currently supports OpenMP only for Fortran; the company targets C OpenMP for the next release, expected by mid 2005. Pathscale was originally targeted at the Opteron platform but now supports IA-32 as well. However, we have found some problems with larger codes, so please exercise caution when using it. Also, the compiler's performance on IA-32 does not appear to be as good as that of PGI.

The Portland Group Compiler Suite is our compiler distribution of choice on Icebox, as it is stable and includes Athlon optimization features. It should also interoperate with GNU, though we have seen problems executing some Fortran codes that linked g77-compiled libraries. As a more mature product than Pathscale, the PGI compilers support OpenMP and ship with more tools.

GNU compilers

The GNU distribution is located in the default area, that is, compilers in /usr/bin, libraries in /usr/lib or /usr/lib64, header files in /usr/include, etc. The user should not need to do anything other than invoke the compiler by its name, e.g.:

g77 source.f -o executable

Pathscale compilers

The latest version of the Pathscale compilers is located at /uufs/icebox/sys/pkg/pscale/std.

In order to use the compilers, users must source a shell script that defines paths and some other environment variables.

  • source /uufs/icebox/sys/pkg/pscale/std/etc/pscale.csh  (for csh/tcsh)
  • source /uufs/icebox/sys/pkg/pscale/std/etc/pscale.sh  (for ksh/bash)

The compilers are invoked as pathcc, pathCC and pathf90 for C, C++ and Fortran 90, respectively. For a list of available flags, use the man pages (e.g. man pathf90).

Aggressive optimization is achieved with -O3 -OPT:Ofast. Further performance gains can be achieved with interprocedural analysis, invoked with the -ipa flag; however, there are some limitations to its use. Contact CHPC staff if you run into problems.

Also note that since the compiler is relatively new, it does not implement all of the Fortran intrinsic functions. Most of the common functions are implemented, though.

For more information on the compiler, visit Pathscale website.

A user's guide that contains useful discussion on optimization flags can be found here.

Portland Group compilers

The latest version of the Portland Group compilers is located at /uufs/icebox/sys/pkg/pgi/std.

In order to use the compilers, users must source a shell script that defines paths and some other environment variables.

  • source /uufs/icebox/sys/pkg/pgi/std/etc/pgi.csh  (for csh/tcsh)
  • source /uufs/icebox/sys/pkg/pgi/std/etc/pgi.sh  (for ksh/bash)

The compilers are invoked as pgcc, pgCC, pgf77 and pgf90 for C, C++, Fortran 77 and Fortran 90, respectively. For a list of available flags, use the man pages (e.g. man pgf90).

We generally recommend the -fastsse flag for good performance.

For more information on the compiler, visit Portland Group website.

Documentation including user's guide, language reference, etc. can be found here.

The Portland Group includes a debugger, pgdbg, along with their compiler suite.

Totalview, a de facto industry-standard debugger, supports both serial and parallel debugging. For details on how to use Totalview, refer to CHPC's Totalview page.

For serial profiling, there are GNU gprof and Portland Group pgprof. For parallel profiling, there are upshot and jumpshot, bundled with MPICH, and the recommended commercial product, the Intel Cluster Tools, available on Delicatearch. For details on how to use the Intel Cluster Tools, refer to CHPC's Intel Cluster Tools webpage.

Large files support (> 2 GB)

Both the GNU and PGI compilers support files larger than 2 GB; however, the user must explicitly request this support.

For the GNU compilers, include two compile flags, "-D_FILE_OFFSET_BITS=64" and "-D_LARGEFILE_SOURCE". The compilation line for gcc would thus look like this:

gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE file.c -o executable.exe

For more information, visit http://www.suse.de/~aj/linux_lfs.html.

Both the Fortran and C PGI compilers support large files; however, the executable must be linked against a different set of standard libraries, which are located in "$PGI/linux86/liblf". To compile e.g. a Fortran 77 program:

pgf77 -Mlfs source.f -o executable.exe

With the -Mlfs flag, the linker searches the liblf directory before the general lib directory, thus linking the correct libraries.

As ICE Box is a distributed-memory parallel system, message passing is the way processes in a parallel program communicate. The Message Passing Interface (MPI) is the prevalent communication system, and several versions of MPI are installed on ICE Box. For instructions on how to use MPI, refer to CHPC's MPI webpage and the Introduction to programming with MPI tutorial presentation.
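As a sketch, assuming the MPICH installation mentioned in the batch script example above provides the usual mpicc compiler wrapper in its bin directory, compiling and running an MPI program might look like this:

/uufs/icebox/sys/pkg/mpich/std/bin/mpicc mpi_program.c -o mpi_program
mpirun -np 4 -machinefile $PBS_NODEFILE ./mpi_program > outputfile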

Most of the newer nodes on ICE Box are dual-processor, which means that shared memory programming can be used on these nodes to save some of the MPI message overhead. OpenMP is emerging as the major industry standard for shared memory programming and is supported by the Portland Group compilers with the command-line flag -mp. More information on OpenMP can be found in the Introduction to programming with OpenMP tutorial presentation.
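As a sketch, compiling an OpenMP code with the PGI -mp flag and running it on one dual-processor node might look like this (the source and program names are placeholders; bash syntax is shown for setting the thread count):

pgcc -mp omp_program.c -o omp_program
export OMP_NUM_THREADS=2    # use both processors of a dual node
./omp_program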

Parallel debugging is supported by the Totalview Debugger. For details on how to use Totalview, refer to CHPC's Totalview page.

A profiler capable of timing MPI communication functions (which are often a bottleneck in a parallel program) is Vampir; we will most likely obtain it. For details on how to use Vampir, refer to CHPC's Vampir/Vampirtrace Profiler webpage.

NFS Issues

To achieve optimal performance on the cluster, a user needs to be mindful of a few things. First, the NFS server that provides home directory space can be oversaturated by excessive use, which results in slower file access and slower run times. The local scratch on each node should be used to reduce the need to access home directory space. NFS reads are much more efficient than NFS writes; based on this, reading from home directory space and writing to local scratch is a good base design. Once a job is completed, gather what is stored in local scratch back to the user's home directory. This procedure has one other advantage: the Ethernet connection is used not only for NFS but also for MPI communications. If this network connection is flooded with too many NFS calls, the efficiency of a parallel job is greatly reduced. Therefore, unless it is necessary, keep this traffic off the network; this leaves the maximum network resources for MPI work.

TCP/IP Issues

In the current implementation, MPI calls are sent over TCP/IP. The large overhead of TCP/IP slows down MPI work. Moreover, its slow-start nature prevents low latency and high bandwidth for small MPI messages, i.e., messages in the range of 256 bytes or less. To work around this problem, it is recommended to consolidate MPI buffers into fewer, larger buffers instead of sending many small ones. Again, this is a temporary state, as software and hardware are advancing and better solutions are near at hand.

Linear algebra subroutines

There are several different BLAS library versions on ICE Box, which are summarized below:

Portland Group BLAS

Portland Group ships its own version of the BLAS library with its compilers. This is the BLAS that gets linked into your executable when you use the PG compilers with the -lblas option. We discourage using it, since it is optimized only for generic i386 processors and does not take advantage of the Pentium II or Athlon caching and vectorizing features. The libraries are located at $PGI/linux86/lib.

Compilation instructions (for reference only):

Fortran

pgf90 (or pgf77) source_name.f -o executable_name -lblas

C/C++

pgcc (pgCC) source_name.c -o executable_name -lblas

ATLAS

Automatically Tuned Linear Algebra Software (ATLAS) is an open source library aimed at providing a portable performance solution. It provides the full BLAS and certain LAPACK routines, which are tuned to the computer platform at compilation time. This is the BLAS-compatible library that we recommend using. We offer a pre-compiled version of ATLAS optimized for the Pentium III class of processors, which should run on both the interactive and compute nodes. The libraries are located at /uufs/icebox/sys/pkg/atlas/std/lib.

Current version is 3.6.0.

Compilation instructions:

Fortran

g77:

g77 source_name.f -o executable_name -L/uufs/icebox/sys/pkg/atlas/std/lib -lf77blas -latlas

PGI compilers:

pgf90 (or pgf77) source_name.f -o executable_name -L/uufs/icebox/sys/pkg/atlas/std/lib -lpgf90blas -latlas

Pathscale compilers:

pathf90 -m32 source_name.f -o executable_name -L/uufs/icebox/sys/pkg/atlas/std/lib -lpathf90blas -latlas

C/C++

Any compiler

pgcc (or pgCC, g++, pathcc, pathCC) source_name.c -o executable_name -L/uufs/icebox/sys/pkg/atlas/std/lib -lcblas -latlas

AMD Core Math Library (ACML)

Since 2003, AMD has been actively developing a mathematical library targeted at its AthlonXP and Opteron processors. As of this writing, it includes fully optimized BLAS, LAPACK and fast Fourier transform routines. For more information, consult the AMD ACML webpage. From our tests and AMD's presentations, the BLAS speed is almost as good as that of ATLAS, and the FFT routines are several percent slower than FFTW but keep improving with new releases. The latest available version, 2.5, is located at /uufs/icebox/sys/pkg/acml/std. Note that there are seven different directories here: those starting with gnu are designed for use with the GNU compilers, those with pgi for the PGI compilers. The different subdirectories denote distributions for Athlon XP (gnu32, with SSE2 instructions), Athlon T-bird (gnu32_nosse2, with just the SSE instruction set) and Athlon Classic (gnu32_nosse, no SSE instruction set). As all current Icebox compute nodes have AthlonXP CPUs, we recommend using the first library set (gnu32). However, note that this code will not run on the interactive nodes, which are Athlon Classics (i.e. no SSE or SSE2). The distribution for the Pathscale compilers is located in /uufs/icebox/sys/pkg/acml/std_pscale and is compiled with the SSE2 instruction set.

Compilation instructions:

Fortran

GNU Fortran:

g77 source_name.f -o executable_name -L/uufs/icebox/sys/pkg/acml/std/gnu32/lib -lacml

PGI Fortran:

pgf90 (or pgf77) source_name.f -o executable_name -L/uufs/icebox/sys/pkg/acml/std/pgi32/lib -lacml

Pathscale Fortran:

pathf90 source_name.f -o executable_name -L/uufs/icebox/sys/pkg/acml/std_pscale/lib -lacml -lg2c

C/C++

gnu C/C++:

gcc (or g++) source_name.c -o executable_name -L/uufs/icebox/sys/pkg/acml/std/gnu32/lib -lacml

PGI C/C++:

pgcc (or pgCC) source_name.c -o executable_name -L/uufs/icebox/sys/pkg/acml/std/pgi32/lib -lacml

Pathscale C/C++:

pathcc (or pathCC) source_name.c -o executable_name -L/uufs/icebox/sys/pkg/acml/std_pscale/lib -lacml