Our current intent is to provide containers for applications that are difficult to build natively on CentOS 7, which our clusters run. Most of these applications are developed on Debian-based Linux distributions (Debian and Ubuntu) and rely on their software stack.
Running a CHPC-provided container is as simple as running the application command itself. We provide an environment module that sets up an alias for this command, which calls the container behind the scenes. If the container provides more commands, we also provide a command that starts a shell in the container, from which the user can call the commands needed to execute their processing pipeline.
Inside the containers, the user can access storage in their home directory or on the scratch file servers.
Currently we provide the following containers:
The Tensorflow GPU container environment can be loaded with
module load tensorflow/1.0.1.gpu. A SLURM script that deploys a Tensorflow calculation to cluster GPU nodes is at
/uufs/chpc.utah.edu/sys/srcdir/tensorflow/1.0.1-cp27/run_tensorflow_gpu.slr. Make sure you run this only on the GPU nodes.
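For orientation, a minimal GPU batch script along these lines might look as follows. This is a sketch, not the contents of the script above: the partition name and the script name my_tf_script.py are placeholders to adapt to your allocation.

```shell
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --ntasks=1
#SBATCH --partition=kingspeak-gpu   # placeholder; use your cluster's GPU partition
#SBATCH --gres=gpu:1                # request one GPU

module load tensorflow/1.0.1.gpu
# the module's alias runs python inside the container behind the scenes
python my_tf_script.py              # my_tf_script.py is a placeholder for your script
```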
The SEQLinkage container wraps the
seqlink command. To use it interactively (for testing only), simply
module load SEQLinkage and then run
seqlink with the appropriate parameters. To run a batch job, use
/uufs/chpc.utah.edu/sys/srcdir/SEQLinkage/1.0.0/run_seqlink.slr as a starting point for your own batch script. Don't forget to use the parallel option,
-j XX, in order to use all CPU cores on a compute node, XX being the number of cores on the node.
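Within such a batch script, the core count does not need to be hard-coded, since SLURM exposes it in an environment variable. A sketch, with the seqlink input options left out (supply your own):

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=4:00:00

module load SEQLinkage
# SLURM_CPUS_ON_NODE holds the number of CPU cores on the allocated node,
# so -j uses all of them regardless of which node type the job lands on
seqlink -j $SLURM_CPUS_ON_NODE   # add your seqlink input options here
```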
bioBakery is a set of tools that can be used separately or in a pipeline.
After loading the module file,
module load bioBakery, use the shortcut we have created to run a single command, e.g.
runbioBakery humann2 parameters. To start a shell in the container and run multiple commands in the shell, run
One can build a Singularity container on their own machine and scp it to CHPC's systems. Singularity runs only on Linux; on MacOS or Windows, one can create a Linux VM using e.g. VirtualBox and install Singularity in it. For details on how this can be done, see our Building Singularity containers locally page.
For security reasons we don't allow building containers on CHPC systems, as building a container requires sudo access.
Singularity only supports Linux containers, so we do not support importing Windows or MacOS containers.
To ensure portability of your Singularity container to CHPC systems, create mount points for the CHPC file systems during the build process:
mkdir /uufs /scratch
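If you build from a bootstrap definition file rather than interactively, the same commands go into the %post section of that file. A minimal sketch:

```shell
# inside the %post section of your Singularity definition file:
mkdir -p /uufs /scratch   # mount points for the CHPC file systems
```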
Then scp the container to a CHPC file server, e.g. your home directory, and run it, e.g. as:
module load singularity
singularity shell my_container.img
Or, if you have defined the
%runscript section in your container, then simply execute the container file, or use
singularity run:
singularity run my_container.img
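For reference, a %runscript section in the container's definition file might look like the following; the application name is a placeholder for whatever your container provides.

```
%runscript
    echo "Starting my_application in the container"
    exec my_application "$@"
```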
Note that in our singularity module, we define two environment variables:
- SINGULARITY_SHELL=/bin/bash - this sets the container shell to bash (easier to use than default sh)
- SINGULARITY_BINDPATH=/scratch,/uufs/chpc.utah.edu - this bind mounts all the /scratch file servers and the /uufs file systems (sys branch, group spaces).
If you prefer to use a different shell, or not bind the file servers, set these variables differently or unset them.
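For example, after module load singularity, the variables can be overridden or unset as below; choosing tcsh as the container shell is purely an illustration.

```shell
# use a different shell inside the container
export SINGULARITY_SHELL=/bin/tcsh
# bind only the scratch servers, not the /uufs trees
export SINGULARITY_BINDPATH=/scratch
# or remove the automatic binds entirely
unset SINGULARITY_BINDPATH
```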
There are two options for using a Docker image through Singularity. One can either convert the Docker image to a Singularity image through the bootstrap method, or run the Docker image directly via Singularity.
Convert docker image to Singularity
This approach requires root or sudo access for Singularity, so it is applicable on one's own machine but not for users on the CHPC clusters. The process is described on the Singularity and Docker page. For practical purposes, one would usually start with a defined Docker image and then add their own application setup process to it, e.g. as we do with the Tensorflow installation.
Running Docker image directly in Singularity
To run a Docker image in Singularity, simply point Singularity to the Docker image, e.g.
singularity shell docker://ubuntu:latest
One caveat with this approach is to make sure that the mount points for the CHPC file systems exist in the Docker image; that is, when creating the Docker image, one needs to run
mkdir /uufs /scratch
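In a Dockerfile, this is a single RUN line; the base image below is just an example.

```
FROM ubuntu:16.04
# create the mount points so Singularity can bind the CHPC file systems
RUN mkdir -p /uufs /scratch
```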