Building Singularity containers locally

Any user can deploy Singularity containers on CHPC Linux machines, provided that they build them themselves on a machine where they have administrator privileges. Below we detail how this can be done.

Preparing your computer

Singularity runs natively on Linux. If you have root access on your Linux machine, you can install Singularity directly.

On a Windows machine, we recommend installing a Linux virtual machine, in which you can then install Singularity for Linux. This is detailed below.

Similarly, on MacOS, the Singularity website provides information on how to install a Linux virtual machine using Homebrew.

Installing Linux VM on a Windows machine

To keep the Linux VM setup simple, we will be using a few command line tools, namely Git Bash, VirtualBox, and Vagrant.

The whole process is nicely described here, but we will outline it below as well with a few specifics. So, follow our recipe and refer to the other link for details.

1. Download and install the prerequisites.

Namely, Git Bash, which is a part of Git for Windows, VirtualBox, and Vagrant. All installs are GUI based and quite straightforward.

Once installed, right click on the desktop and you should see "Git Bash here ..."; click on that and a command shell window opens. This is a bash shell (with MinGW Linux tools running underneath). We will be using it to do the rest of the VM work.

2. Set up Linux Ubuntu Virtual Machine

We can do this as any Windows user in Git Bash, but if you want to sync a Windows folder to your Linux VM, you need to start Git Bash as an administrator. Go to the search bar at the bottom left of the Windows taskbar, type Git Bash, right click it and choose "Run as Administrator". Then run the first command (the $ denotes the command prompt):

$ pwd

Notice that we are in your user's directory on drive C:. It is preferable to organize files and directories in a certain fashion, so we want to put all our container files in a dedicated directory:

$ mkdir -p /c/work/containers/vagrant_ubuntu16
$ cd /c/work/containers/vagrant_ubuntu16

Now we are ready to prepare the VM. First we add, initialize and download the base image:

$ vagrant box add ubuntu/xenial64
$ vagrant init ubuntu/xenial64

This prepares a file called Vagrantfile, which defines the Ubuntu VM setup.

If we want to sync folders with our Windows host, edit Vagrantfile and add the following to it, right before the last line, which says end:

  config.vm.synced_folder "vmfiles", "/vagrant", type: "smb"

This tells Vagrant to make the directory /vagrant in the Ubuntu VM visible as the vmfiles directory in the current directory (/c/work/containers/vagrant_ubuntu16/vmfiles) on your Windows host. We also need to create this directory:

$ mkdir vmfiles
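For reference, with the sync line in place, the relevant parts of the generated Vagrantfile should look roughly like the sketch below (the actual generated file contains many additional commented-out options):

```ruby
# Vagrantfile (trimmed sketch) - defines the Ubuntu VM
Vagrant.configure("2") do |config|
  # the base box we added and initialized above
  config.vm.box = "ubuntu/xenial64"
  # share the host's vmfiles directory as /vagrant in the VM
  config.vm.synced_folder "vmfiles", "/vagrant", type: "smb"
end
```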

We also need to make sure that the Virtual Box Guest Additions are installed and up to date (guest additions are extra packages for the VM guest - Ubuntu in our case). This can be done by installing a Vagrant plugin:

$ vagrant plugin install vagrant-vbguest

Now we are ready to start the Ubuntu VM:

$ vagrant up

If you set up the folder sync, you will be prompted for your Windows username and password.

If your Guest Additions are not present or are out of sync with VirtualBox, you'll see a lot of messages, but the VM will eventually start up.

Now we can ssh to the VM:

$ vagrant ssh
$ whoami

Before we can install Singularity, we need to add a few dependencies, so we use the sudo command (the "ubuntu" user has full sudo access) to install them. Building certain Linux containers also requires their specific prerequisites (debootstrap for Debian/Ubuntu, yum for RHEL/CentOS/Fedora). We may as well also run an OS update:

$ sudo apt-get update
$ sudo apt-get install -y build-essential git autoconf libtool
$ sudo apt-get install -y debootstrap yum

Note that the VM is running in the background (started with vagrant up), and it will stay in the background when you disconnect (exit) from its ssh session or close the Git Bash window. To terminate the VM, call vagrant halt.

Installing Singularity

At this point, no matter what your host is (Linux, Windows, Mac), we are running Linux, either natively or in a VM. Now we install Singularity by first cloning its Git repository and then configuring and building it using the Linux autoconf tools.

$ git clone https://github.com/singularityware/singularity.git
$ cd singularity
$ ./autogen.sh
$ ./configure --prefix=/usr/local
$ make
$ sudo make install

That should be it; you should be able to call the singularity command either as a regular user (singularity) or as root (sudo singularity).

Building a Singularity container

The Singularity website offers a host of tutorials on how to use Singularity and how to build containers. We will here focus on building an Ubuntu Linux based container, because in our experience most of the harder-to-install packages are developed on Ubuntu and are thus the easiest to deploy on Ubuntu.

There are two basic components to building a Singularity container. The first is to run the singularity commands to provision the container and to bootstrap it (install the OS and required programs). The second is to provide information on what OS to install and how. This can be done either through a definition file, or through importing another container definition, such as a Docker image.

In our simple demonstration here, we will build the container based on a bootstrap definition file. The container we will build should serve as a good base for Ubuntu based application deployment. For more details on how to include more complicated pieces such as GPU and MPI support or specific Python based programs, and for some tips and tricks, see the CHPC Singularity containers github page.

One important detail when deploying a locally built container at CHPC machines is to create mount points for CHPC file spaces, so that we can use our home directory and scratch file systems in the container. To do that, put the following at the end of the %post section of the bootstrap definition file:

mkdir /uufs /scratch
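To put the mkdir line in context, a minimal ubuntu16.def might look like the sketch below. This assumes a debootstrap-based Ubuntu 16.04 (xenial) bootstrap; the package list in %post is only illustrative:

```
BootStrap: debootstrap
OSVersion: xenial
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%post
    # basic tools (illustrative; install what your application needs)
    apt-get update
    apt-get install -y wget vim

    # mount points for CHPC file systems
    mkdir /uufs /scratch
```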

The two singularity commands that we need to run are:

sudo singularity create --size 2048 ubuntu16.img

This creates a bare container of size 2 GB located in a file called ubuntu16.img.

The second command bootstraps the container:

sudo singularity bootstrap ubuntu16.img ubuntu16.def

There will be a lot of output as the OS and programs are being installed. Once that is done, the container is ready.

To check that the container is working we can shell into it:

singularity shell -s /bin/bash ubuntu16.img

If one needs to install additional packages, shell into the container as root (sudo) with write permissions (-w) on the container:

sudo singularity shell -w -s /bin/bash ubuntu16.img

After interactively installing packages, copy the installation commands into the %post section of the definition file (ubuntu16.def in this case), so that they are automatically included in future container builds. In general, when installing a new application in a container, we iterate between running the install commands interactively and putting them into the bootstrap definition file, until the automatic build works correctly.
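For example, once we have interactively verified that a package installs cleanly, we append the same commands to %post. A hypothetical addition (python3-pip is just an example package, not a requirement):

```
%post
    apt-get update
    apt-get install -y python3-pip
```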

Deploying Singularity container at CHPC clusters

Once we have a container built to our liking, we need to move it to the CHPC systems. You can use your favorite scp program (WinSCP, CyberDuck) if the folder sync between the host and the VM guest was set up, or scp the container directly from the Linux VM (or native Linux machine) you are running, substituting your username and a CHPC login node for the placeholders:

scp ubuntu16.img <username>@<chpc-hostname>:~/my_containers/

To test the container, we ssh to a CHPC machine and run:

ml singularity
singularity shell ~/my_containers/ubuntu16.img


Last Updated: 1/29/19