Singularity is a containerization solution designed for high-performance computing cluster environments. It allows a user on an HPC resource to run an application using a different operating system than the one provided by the cluster. For example, the application may require Ubuntu but the cluster OS is CentOS. Conceptually, it is similar to other container software such as Docker, but is designed with several important differences that make it more suited for HPC environments.
Singularity can run images from a variety of sources, including both a flat image file or a Docker image from Docker Hub.
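For instance, both styles of invocation look like this (a minimal sketch; `myimage.simg`, `myaccount/myimage`, and `mycommand` are hypothetical placeholders):

```
# Run a command from a local flat image file
singularity exec ./myimage.simg mycommand

# Run the same command from an image hosted on Docker Hub
singularity exec docker://myaccount/myimage mycommand
```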
The following table lists the currently available images and the command to run the software.
| Software | Version | Command to Run | Additional Notes |
| -------- | ------- | -------------- | ---------------- |
| TensorFlow GPU | 1.4, 1.12 | | Ubuntu 16.04.1 LTS w/CUDA Toolkit |
| Keras w/TensorFlow GPU backend | 2.0.4, 2.1.5, 2.2.4 | | |
| Sonnet GPU | 1.13, 1.27 | | |
| ENet Caffe GPU | 427a014 | | |
If you would like to request an image to be added, please fill out the HCC Software Request Form and indicate you would like to use Singularity.
To use Singularity on HCC machines, first load the `singularity` module with `module load singularity`.
Singularity provides a few different ways to access the container. The most common is to use the `exec` command to run a specific command within the container; alternatively, the `shell` command launches a bash shell for interactive work. Both commands take the source of the image to run as the first argument. The `exec` command takes an additional argument for the command to run within the container. Finally, pass any arguments for the program itself in the same manner as you would if running it directly.
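The `exec` form is shown in the Spades example below; an interactive `shell` session, by contrast, might look like this (a sketch; the `Singularity>` prompt is illustrative of how the prompt changes inside the container):

```
singularity shell docker://unlhcc/spades
# The prompt changes to indicate you are inside the container
Singularity> spades.py --version
Singularity> exit
```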
For example, the Spades Assembler software is run using the Docker image `unlhcc/spades` via the command `spades.py`. To run the software using Singularity, the commands are:
```
module load singularity
singularity exec docker://unlhcc/spades spades.py <spades arguments>
```
Using Singularity in a SLURM job is similar to how you would use any other software within a job. Load the module, then execute your image:
```
#!/bin/sh
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out

module load singularity
singularity exec docker://unlhcc/spades spades.py <spades arguments>
```
Custom images can be created locally on your personal machine and added to Docker Hub for use on HCC clusters. More information on creating custom Docker images can be found in the Docker documentation.
You can create a custom Docker image and use it with Singularity on our clusters. Singularity can run images directly from Docker Hub, so you don't need to upload anything to HCC. You just need a Docker Hub account and to upload your image there. Then, if you want to run the command `mycommand` from the image `myimage`, type:
```
module load singularity
singularity exec docker://myaccount/myimage mycommand
```
where `myaccount` is your Docker Hub account.
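For reference, a typical build-and-push workflow on your personal machine might look like the following sketch (assuming Docker is installed locally and `myaccount/myimage` is the placeholder name from above):

```
# Build the image from the Dockerfile in the current directory
docker build -t myaccount/myimage .

# Authenticate to Docker Hub and push the image
docker login
docker push myaccount/myimage
```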
If you see the error `ERROR MANIFEST_INVALID: manifest invalid` when running the command above, try:
```
module load singularity
unset REGISTRY
singularity exec docker://myaccount/myimage mycommand
```
If you get the error `FATAL: kernel too old` when using your Singularity image on the HCC clusters, the glibc version in your image is too new for the kernel on the cluster. One way to solve this is to use a lower version of your base image (for example, if you used `ubuntu:18.04`, use `ubuntu:16.04` instead).
All the Dockerfiles of the images we host on HCC are publicly available here. You can use them as an example when creating your own image.
Alternatively, instead of building an image from scratch, you can start with an HCC-provided image as the base for your Dockerfile (i.e., reference it in your Dockerfile's `FROM` line) and add any additional packages you desire.
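As a sketch, such a Dockerfile might look like the following (the `unlhcc/tensorflow-gpu` base is one of the images from the table above; the added packages are arbitrary examples):

```
# Use an HCC-provided image as the base instead of a bare OS image
FROM unlhcc/tensorflow-gpu

# Add any additional packages you need
RUN pip install nibabel tables
```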
Unfortunately, for logistical reasons it's not possible to create one image that has every available Python package installed. Images are created with a small set of the most commonly used scientific packages, but you may need others. If so, you can install them in a location in your $WORK directory and set the `PYTHONPATH` variable to that location in your submit script. The extra packages will then be "seen" by the Python interpreter within the image. To ensure the packages will work, the install must be done from within the container via the `singularity shell` command. For example, suppose you are using the `tensorflow-gpu` image and need the `nibabel` and `tables` packages.
First, run an interactive SLURM job to get a shell on a worker node.
```
srun --pty --mem=4gb --qos=short $SHELL
```
After the job starts, the prompt will change to indicate you’re on a worker node. Next, start an interactive session in the container.
```
module load singularity
singularity shell docker://unlhcc/tensorflow-gpu
```
This may take a few minutes to start. Again, the prompt will change and begin with `Singularity` to indicate you're within the container.
Next, install the needed packages via `pip` to a location somewhere in your work directory, for example `$WORK/tf-gpu-pkgs`. (If you are using Python 3, use `pip3` instead of `pip`.)
```
export LC_ALL=C
pip install --system --target=$WORK/tf-gpu-pkgs --install-option="--install-scripts=$WORK/tf-gpu-pkgs/bin" nibabel tables
```
You should see some progress indicators, and a "Successfully installed..." message at the end. Exit both the container and the interactive SLURM job by typing `exit` twice. The above steps only need to be done once for each image you need additional packages for. Be sure to use a separate location for each image's packages.
To make the packages visible within the container, you'll need to add a line to the submit script used for your Singularity job. Before the lines that load the `singularity` module and run the script, add a line exporting the `PYTHONPATH` variable set to the install location:
```
#!/bin/sh
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --partition=gpu
#SBATCH --gres=gpu
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out

export PYTHONPATH=$WORK/tf-gpu-pkgs
module load singularity
singularity exec docker://unlhcc/tensorflow-gpu python /path/to/my_tf_code.py
```
The additional packages should then be available for use by your Python code running within the container.
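As a quick check (a sketch, assuming the `nibabel` and `tables` install from the steps above), you can confirm the imports resolve inside the container from an interactive job:

```
export PYTHONPATH=$WORK/tf-gpu-pkgs
module load singularity
singularity exec docker://unlhcc/tensorflow-gpu python -c "import nibabel, tables"
```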
You can see all the available versions of the software built with Singularity in the table above. If you don't specify a software version, Singularity will use the latest one. If you want to use a specific version instead, append the version number from the table to the image name. For example, if you want to use the Singularity image for Spades version 3.11.0, run:
```
singularity exec docker://unlhcc/spades:3.11.0 spades.py
```