Using Singularity and Docker Containers

What is Singularity?

Singularity is a containerization solution designed for high-performance computing cluster environments.  It allows a user on an HPC resource to run an application using a different operating system than the one provided by the cluster.  For example, the application may require Ubuntu but the cluster OS is CentOS.  Conceptually, it is similar to other container software such as Docker, but is designed with several important differences that make it more suited for HPC environments.  

  • Encapsulation of the environment
  • Containers are image based
  • No user contextual changes or root escalation allowed
  • No root owned daemon processes

Finding Images

Singularity can run images from a variety of sources, including flat image files and Docker images from Docker Hub.

Docker Hub

Publicly available Docker images can be found at Docker Hub. For convenience, HCC also provides a set of images on Docker Hub known to work on HCC resources.

Available Images at HCC

The following table lists the currently available images and the command to run the software.

Software Version Command to Run Additional Notes
DREAM3D 6.3.29, 6.5.36, 6.5.137, 6.5.138 singularity exec docker://unlhcc/dream3d PipelineRunner
Spades 3.11.0 singularity exec docker://unlhcc/spades
Macaulay2 1.15, 1.9.2 singularity exec docker://unlhcc/macaulay2 M2 <options> Replace <options> with the desired options for Macaulay2.
CUDA (Ubuntu) 10.2 singularity exec docker://unlhcc/cuda-ubuntu <my CUDA program> Ubuntu 16.04.1 LTS w/CUDA Toolkit
TensorFlow GPU 1.4, 1.12, 1.14, 2.0.0, 2.1.0, 2.2.0 singularity exec docker://unlhcc/tensorflow-gpu python /path/to/ Provides Python 3.7
Keras w/Tensorflow GPU backend 2.0.4, 2.1.5, 2.2.4, 2.3.1 singularity exec docker://unlhcc/keras-tensorflow-gpu python /path/to/ Use python3 for Python3 code
Octave 4.2.1 singularity exec docker://unlhcc/octave octave
Sonnet GPU 1.13, 1.27, 1.34, 2.0.0 singularity exec docker://unlhcc/sonnet-gpu python /path/to/ Provides Python 3.7
Neurodocker w/ANTs 2.2.0, 2.3.4 singularity exec docker://unlhcc/neurodocker-ants <ants script> Replace <ants script> with the desired ANTs program
GNU Radio 3.7.11 singularity exec docker://unlhcc/gnuradio python /path/to/ Replace python /path/to/ with other GNU Radio commands to run
Neurodocker w/AFNI 17.3.00, 19.2.20, 19.2.21, 20.3.01 singularity exec docker://unlhcc/neurodocker-afni <AFNI program> Replace <AFNI program> with the desired AFNI program
Neurodocker w/FreeSurfer 6.0.0 singularity run -B <path to your FS license>:/opt/freesurfer/license.txt docker://unlhcc/neurodocker-freesurfer recon-all Substitute <path to your FS license> with the full path to your particular FS license file. Replace recon-all with other FreeSurfer commands to run.
fMRIprep 1.0.7 singularity exec docker://unlhcc/fmriprep fmriprep
ndmg 0.0.50 singularity exec docker://unlhcc/ndmg ndmg_bids
NIPYPE (Python2) 1.0.0 singularity exec docker://unlhcc/nipype-py27 <NIPYPE program> Replace <NIPYPE program> with the desired NIPYPE program
NIPYPE (Python3) 1.0.0 singularity exec docker://unlhcc/nipype-py36 <NIPYPE program> Replace <NIPYPE program> with the desired NIPYPE program
DPARSF 4.3.12 singularity exec docker://unlhcc/dparsf <DPARSF program> Replace <DPARSF program> with the desired DPARSF program
Caffe GPU 1.0, 1.0-136-g9b89154 singularity exec docker://unlhcc/caffe-gpu caffe Image provides Python 3.7. Matcaffe is included; load matlab/r2016b module and add -B $MATLAB_ROOT:/opt/matlab to the singularity options to use.
ENet Caffe GPU 427a014, 22d356c singularity exec docker://unlhcc/enet-caffe-gpu <ENET program> Replace <ENET program> with the desired ENET program
ROS Kinetic 1.3.1, 1.3.2 singularity exec docker://unlhcc/ros-kinetic <ROS program> Replace <ROS program> with the desired ROS program
Mitsuba 1.5.0 singularity exec docker://unlhcc/mitsuba mitsuba
FImpute 2.2 singularity exec docker://unlhcc/fimpute FImpute <control file> Replace <control file> with the control file you have prepared
Neurodocker w/FSL 6.0.3, 5.0.11 singularity run docker://unlhcc/neurodocker-fsl <FSL program> Replace <FSL program> with the desired FSL program
gdc-client 1.4.0 singularity exec docker://unlhcc/gdc-client gdc-client <sub-command> Replace <sub-command> with the desired gdc-client sub-command
BLUPF90 1.0 singularity exec docker://unlhcc/blupf90 <command> Replace <command> with any command from the suite (blupf90, renumf90, etc.)
RMark 2.2.5 singularity exec docker://unlhcc/rmark Rscript my_r_script.r
SURPI 1.0.18 singularity exec docker://unlhcc/surpi -f </path/to/config> Replace </path/to/config> with the full path to your config file
PyTorch 1.0.1, 1.1.0, 1.2.0, 1.5.0 singularity exec docker://unlhcc/pytorch python /path/to/ This image includes both CPU and GPU support, and provides Python 3.7.
bioBakery 1.1, 3.0.0 singularity exec docker://unlhcc/biobakery <bioBakery program> Replace <bioBakery program> with the desired bioBakery program and its arguments
LIONS 0.2 singularity exec -B <resource directory>:/LIONS-docker/resources/<genomeName> -B <data_directory>:/LIONS-data docker://unlhcc/lions <path/to/parameter/ctrl> Replace <path/to/parameter/ctrl> with the path to your parameter file.
lyve-SET 1.1.4f singularity exec docker://unlhcc/lyve-set <lyve-SET program> Replace <lyve-SET program> with any command from the lyve-SET suite
RASTtk 1.3.0 singularity exec docker://unlhcc/rasttk <rasttk program> Replace <rasttk program> with any command from the RASTtk suite.
CellRanger 3.0.2, 3.1.0, 4.0.0, 6.1.2 singularity exec docker://unlhcc/cellranger cellranger <cellranger program> Replace <cellranger program> with any command from the CellRanger suite.
SkylineRunner 3.0.19158 singularity run -B $PWD:/data -B /tmp:/mywineprefix docker://unlhcc/skylinerunner mywine SkylineCmd <options> Replace $PWD with an absolute path if not running from the directory containing data.
MXNet GPU (Python only) 1.5.0, 1.6.0 singularity exec docker://unlhcc/mxnet-gpu python /path/to/ Provides Python 3.7.
ORFfinder 0.4.3 singularity exec docker://unlhcc/orffinder ORFfinder <options> Replace <options> with the available options for ORFfinder.
ARG_OAP 2.0 singularity exec docker://unlhcc/arg_oap <arg_oap program> Replace <arg_oap program> with the desired ARG_OAP program and its arguments.
CRISPRCasFinder 4.2.20, 4.2.19 singularity exec docker://unlhcc/crisprcasfinder -in </path/to/input/fasta> <options> -soFile /opt/CRISPRCasFinder/ Replace <path/to/input/fasta> with the path to your input fasta file, and replace <options> with the available options for CRISPRCasFinder.
WINE 5.0 singularity exec -B /tmp:/mywineprefix docker://unlhcc/wine-ubuntu mywine <windows program> Replace <windows program> with the full path to the Windows binary.
Slicer 4.10.2 singularity exec docker://unlhcc/slicer Slicer <options> Slicer comes with multiple CLI modules that are located in /home/slicer/lib/Slicer-4.10/cli-modules/ within the image. For example, to use Slicer with the module BRAINSFit, one can run Slicer as singularity exec docker://unlhcc/slicer Slicer --launch /home/slicer/lib/Slicer-4.10/cli-modules/BRAINSFit <options> where <options> are the available options for BRAINSFit.
Inverted Repeats Finder (IRF) 3.07 singularity exec docker://unlhcc/irf irf307.linux.exe
COMSOL 5.5, 5.6 singularity run -B $COMSOL_ROOT:/opt/comsol docker://unlhcc/comsol comsol batch <comsol args> This image does NOT include COMSOL itself. It is a thin wrapper to allow newer (>=5.5) versions of COMSOL to run in batch mode on Crane. You must also load the respective comsol module. On Rhino, using this image is not necessary - load and run comsol directly.
r-inla 20.03.17 singularity exec docker://unlhcc/r-inla Rscript /path/to/my/script.R Provides R 3.6 with the INLA package and tidyverse suite.
Blender 2.83.1 singularity exec docker://unlhcc/blender blender <options> Replace <options> with any of the Blender CLI arguments.
ASAP 1.9 singularity exec docker://unlhcc/asap ASAP
freesurfer 5.3, 6.0 singularity run -B $MATLAB_ROOT:/opt/matlab docker://unlhcc/freesurfer recon-all Provides FreeSurfer 5.3, 6.0. Additionally, load a matlab module to make MATLAB available in the container.
PATRIC 1.035 singularity exec docker://unlhcc/patric <p3-command> Provides the PATRIC command line interface. Replace <p3-command> with the specific PATRIC command to run.
AlphaFold 2.0.0, 2.2.0 singularity run -B /work/HCC/BCRF/app_specific/alphafold/2.2.0/:/data -B .:/etc --pwd /app/alphafold docker://unlhcc/alphafold <options> Replace <options> with any of the AlphaFold CLI arguments.
S3V2_IDEAS_ESMP d05d3e0 singularity exec docker://unlhcc/s3v2_ideas_esmp <options> Replace <options> with any of the S3V2_IDEAS_ESMP arguments.
APSIM Classic 7.9 singularity exec docker://unlhcc/apsim-classic Apsim.exe <input> Replace <input> with the APSim input filename.

If you would like to request an image to be added, please fill out the HCC Software Request Form and indicate you would like to use Singularity.

Use images on HCC resources

To use Singularity on HCC machines, first load the singularity module. Singularity provides a few different ways to access the container. The most common is the exec command, which runs a specific command within the container; alternatively, the shell command launches a bash shell to work interactively. Both commands take the source of the image to run as the first argument. The exec command takes an additional argument for the command to run within the container.

Finally, pass any arguments for the program itself in the same manner as you would if running it directly. For example, the Spades Assembler software is run using the Docker image unlhcc/spades. To run the software using Singularity, the commands are:

Run Spades using Singularity
module load singularity
singularity exec docker://unlhcc/spades <spades arguments>

Use images within a SLURM job

Using Singularity in a SLURM job is similar to how you would use any other software within a job. Load the module, then execute your image:

Example Singularity SLURM script
#!/bin/bash
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out

module load singularity
singularity exec docker://unlhcc/spades <spades arguments>

Create a custom image

Custom images can be created locally on your personal machine and added to Docker Hub for use on HCC clusters. More information on creating custom Docker images can be found in the Docker documentation.

You can create a custom Docker image and use it with Singularity on our clusters. Singularity can run images directly from Docker Hub, so you don’t need to upload anything to HCC. For this purpose, you just need to have a Docker Hub account and upload your image there. Then, if you want to run the command “mycommand” from the image “myimage”, type:

module load singularity
singularity exec docker://myaccount/myimage mycommand

where “myaccount” is your Docker Hub account.

In case you see the error ERROR MANIFEST_INVALID: manifest invalid when running the command above, try specifying the full registry path for the image:

module load singularity
singularity exec docker://index.docker.io/myaccount/myimage mycommand

If you get the error FATAL: kernel too old when using your Singularity image on the HCC clusters, it means the glibc version in your image is too new for the kernel on the cluster. One way to solve this is to use a lower version of your base image (for example, if you have used ubuntu:18.04, use ubuntu:16.04 instead).
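One way to see which glibc version an image was built against is to run ldd inside the container (a sketch; ldd is present in most glibc-based Linux images, and the image name below is a placeholder):

```shell
# Print the glibc version line; inside a container, run the same command via
# "singularity exec docker://myaccount/myimage ldd --version".
ldd --version | head -n1
```

Compare the reported version against what the cluster kernel supports before rebuilding on an older base image.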

All the Dockerfiles of the images we host on HCC are publicly available here. You can use them as an example when creating your own image.

Add packages to an existing image

Alternatively, instead of building an image from scratch, you can start with an HCC-provided image as the base for your Dockerfile (i.e. FROM unlhcc/spades) and add any additional packages you desire.
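A minimal Dockerfile along these lines might look like the following sketch. The added package is an example only, and the apt-get usage assumes a Debian/Ubuntu-based base image; adjust to yum if the base image is CentOS-based.

```dockerfile
# Sketch: extend an HCC-provided image with an extra system package.
# unlhcc/spades is from the table above; samtools is a hypothetical addition.
FROM unlhcc/spades

RUN apt-get update \
    && apt-get install -y --no-install-recommends samtools \
    && rm -rf /var/lib/apt/lists/*
```

Build and push it to your Docker Hub account (docker build -t myaccount/myimage . followed by docker push myaccount/myimage), then run it on the cluster with singularity exec docker://myaccount/myimage <command>.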

The following method only works for installing packages via pip that have no system-level dependencies. If you require packages that are installed via yum or apt, you will need to create a custom image.

For logistical reasons, it’s unfortunately not possible to create one image that has every available Python package installed. Images are created with a small set of the most commonly used scientific packages, but you may need others. If so, you can install them in a location within your $WORK directory and set the PYTHONPATH variable to that location in your submit script. The extra packages will then be “seen” by the Python interpreter within the image. To ensure the packages will work, the install must be done from within the container via the singularity shell command. For example, suppose you are using the tensorflow-gpu image and need the packages nibabel and tables. First, run an interactive SLURM job to get a shell on a worker node.

Run an interactive SLURM job
srun --pty --mem=4gb --qos=short --gres=gpu --partition=gpu $SHELL

The --gres=gpu --partition=gpu options are used here as the tensorflow-gpu image is GPU enabled. If you are using a non-GPU image, you may omit those options. See the page on submitting GPU jobs for more information.

After the job starts, the prompt will change to indicate you’re on a worker node.  Next, start an interactive session in the container.

Start a shell in the container
module load singularity
singularity shell docker://unlhcc/tensorflow-gpu

This may take a few minutes to start.  Again, the prompt will change and begin with Singularity to indicate you’re within the container.

Next, install the needed packages via pip to a location somewhere in your work directory.  For example, $WORK/tf-gpu-pkgs.  (If you are using Python 3, use pip3 instead of pip).

Install needed Python packages with pip
export LC_ALL=C
pip install --system --target=$WORK/tf-gpu-pkgs --install-option="--install-scripts=$WORK/tf-gpu-pkgs/bin" nibabel tables

You should see some progress indicators, and a “Successfully installed...” message at the end. Exit both the container and the interactive SLURM job by typing exit twice. The above steps only need to be done once per image you need additional packages for. Be sure to use a separate location for each image’s extra packages.

To make the packages visible within the container, you’ll need to add a line to the submit script used for your Singularity job.  Before the lines to load the singularity module and run the script, add a line setting the PYTHONPATH variable to the $WORK/tf-gpu-pkgs directory. For example,

Example SLURM script
#!/bin/bash
#SBATCH --time=03:15:00          # Run time in hh:mm:ss
#SBATCH --mem-per-cpu=4096       # Maximum memory required per CPU (in megabytes)
#SBATCH --job-name=singularity-test
#SBATCH --partition=gpu
#SBATCH --gres=gpu
#SBATCH --error=/work/[groupname]/[username]/job.%J.err
#SBATCH --output=/work/[groupname]/[username]/job.%J.out
export PYTHONPATH=$WORK/tf-gpu-pkgs
module load singularity
singularity exec docker://unlhcc/tensorflow-gpu python /path/to/

The additional packages should then be available for use by your Python code running within the container.
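The PYTHONPATH mechanism itself is ordinary Python behavior and can be sketched outside any container (the directory and module name below are made up for illustration, standing in for the pip --target location and the installed packages):

```shell
# Create a stand-in package directory, as "pip install --target" would.
mkdir -p /tmp/demo-pkgs
echo "answer = 42" > /tmp/demo-pkgs/mymod.py

# With PYTHONPATH set, the interpreter "sees" the extra location.
PYTHONPATH=/tmp/demo-pkgs python3 -c "import mymod; print(mymod.answer)"   # prints 42
```

The same lookup happens inside the container, which is why exporting PYTHONPATH in the submit script is sufficient.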

What if I need a specific software version of the Singularity image?

You can see all the available versions of the software built with Singularity in the table above. If you don’t specify a version, Singularity will use the latest one. To use a specific version instead, append the version number from the table to the image name. For example, to use the Singularity image for Spades version 3.11.0, run:

singularity exec docker://unlhcc/spades:3.11.0 <spades arguments>