CryoSPARC is a complete software solution for processing cryo-electron microscopy (cryo-EM) data. On HCC resources, CryoSPARC can be accessed as an Interactive GUI App via the Swan Open OnDemand (OOD) portal.
Launching CryoSPARC via the Open OnDemand portal starts an interactive Desktop that runs the CryoSPARC “master” process along with a single worker process on the allocated compute node. This is very similar to a Single Workstation install, as described in the CryoSPARC documentation, running on a machine with modest resources. While the interactive Desktop can be used to perform CryoSPARC analyses directly, it is mostly used to submit computationally intensive (including GPU) CryoSPARC jobs to the cluster via SLURM. Therefore, launching the Desktop does not require significant CPU or RAM resources, nor a GPU node. On the other hand, the submitted CryoSPARC jobs can take full advantage of GPU nodes. The particular Swan GPU nodes used for the submitted CryoSPARC jobs can be set using the Show advanced settings… checkbox in the CryoSPARC OOD Form.
The CryoSPARC App and submitted tasks run as SLURM jobs under your personal HCC account. As with any other SLURM job, you have full control over the processes and the data files. The CryoSPARC projects and data files on Swan can only be accessed by you, unless you explicitly choose to share them.
This will open a form that needs to be filled in with information about CryoSPARC and the requested resources. Below is an explanation of some of the required fields for launching the CryoSPARC App:

- **CryoSPARC session location**: the default is `$WORK/cryosparc`. You can change this location using the Select Path button. You can select any of the available file systems on Swan aside from the `$HOME` filesystem; we do not recommend using `$HOME` for the session folder. Please note that each file system has its own advantages and disadvantages. Note that this location is *not* the same as the project path(s) where projects are stored; it is used for CryoSPARC's internal database and configuration. Do not change this value unless you are absolutely sure of the ramifications.
- **Cleanup checkbox**: if you select this checkbox, all the existing database and configuration files in the CryoSPARC session location will be erased. Please select this checkbox only if needed.
- **Partition**: the default is `batch`, or any other leased partition you have access to.

In addition to the basic fields, there are two advanced settings you can set under the Show advanced settings… checkbox in the CryoSPARC OOD Form:

- **Partition for the submitted CryoSPARC jobs**: the default is the `gpu` partition. If you have access to a priority access partition with GPU resources, you may specify that partition here to reduce queue time.
- **Highmem factor**: the factor by which memory requests are multiplied when using the `highmem` cluster lane (described below). That is, the preset memory values within CryoSPARC for each job type will be multiplied by this factor. Certain input sets combined with particular options may require more memory than the preset CryoSPARC values, and will otherwise fail. Use this value in combination with the `highmem` cluster lane to increase the requested memory for specific jobs so they can complete successfully. Valid values are integers from 1 to 10.

After selecting the needed resources and pressing “Launch”, the CryoSPARC App will start.
Depending on the requested resources, this process should take a few minutes.
When the App is ready, the CryoSPARC OOD Info Card will show information about the email and password needed to login to CryoSPARC, as well as some other useful information, such as the Login URL and a few auxiliary programs.
The email address value to use is `<username>@swan.unl.edu`, where `<username>` is replaced with your HCC username. Please note that this is not a functioning email address, and is only used for logging into CryoSPARC on Swan.
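Concretely, the login email is just your HCC username with a fixed suffix appended. A minimal sketch (`demo01` is a placeholder username; on Swan, your own username is available in `$USER`):

```shell
# Sketch: build the CryoSPARC login email from an HCC username.
# "demo01" is a placeholder; on Swan you would use "$USER" instead.
USERNAME="demo01"
EMAIL="${USERNAME}@swan.unl.edu"
echo "$EMAIL"   # demo01@swan.unl.edu
```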
Once the session is ready, you will see a “Launch CryoSPARC:Swan” button.
This button will start the CryoSPARC interactive Desktop and open the Firefox browser to the CryoSPARC login page.
Please use the provided email and password to login to the CryoSPARC Firefox session and start using CryoSPARC.
Once you create a CryoSPARC job and are ready to Queue it, you can select one of the available lanes to run it on: `default`, `swan`, or `swan-highmem`.
- **default (node)**: uses the current compute node to run the job. This job will use the resources and partition selected in the CryoSPARC OOD Form. This will usually be a modest amount of resources (1-2 cores and 8 GB of RAM), so the `default` lane should only be used for short-running, light tasks.
- **swan (cluster)**: uses SLURM to submit the CryoSPARC job to the cluster. These jobs have a maximum runtime of 7 days, use the number of CPUs or GPUs specified when creating the CryoSPARC job, and run in the partition selected under the Show advanced settings… checkbox in the CryoSPARC OOD Form. The submitted CryoSPARC jobs run in the background; depending on the requested resources and partition utilization, it may take some time before they start running. The `swan` lane should be used for the majority of computing tasks.
- **swan-highmem (cluster)**: otherwise identical to the `swan` lane, but increases the amount of memory (RAM) requested by a constant factor. Certain jobs may require more memory than the preset values within CryoSPARC; this lane can be used for jobs that would otherwise fail in the standard `swan` lane due to insufficient memory. The default is to multiply the preset CryoSPARC values by a factor of 4, but this factor may be changed under advanced settings when the CryoSPARC App is launched. Using this lane when it is not needed will result in longer queue times.

Please note that the “master” process must remain running while the submitted CryoSPARC jobs are queued and running.
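The `swan-highmem` scaling is a plain multiplication of CryoSPARC's preset memory value by the Highmem factor. A minimal sketch (the 24 GB preset here is purely hypothetical; actual presets vary by CryoSPARC job type):

```shell
# Illustrative only: how the Highmem factor scales a job's memory request.
# PRESET_MEM_GB is a hypothetical CryoSPARC preset for some job type.
PRESET_MEM_GB=24
HIGHMEM_FACTOR=4   # the default Highmem factor in the OOD Form
REQUESTED_MEM_GB=$((PRESET_MEM_GB * HIGHMEM_FACTOR))
echo "swan would request ${PRESET_MEM_GB}GB; swan-highmem requests ${REQUESTED_MEM_GB}GB"
```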
In addition to CryoSPARC, the CryoSPARC OOD App provides a few auxiliary programs, such as wrapper scripts for Topaz and deepEMhancer.
The executable paths to these scripts need to be set manually at the project level. Once this is done, the set location will apply to all newly created jobs:

- Topaz wrapper script: `/usr/local/bin/topaz.sh`
- deepEMhancer wrapper script: `/usr/local/bin/deepemhancer.sh`
- deepEMhancer models location: `/usr/local/share/deepemhancer_models/`
The default location of the CryoSPARC database is `$WORK/cryosparc`.
If you want to test the CryoSPARC OOD App, you can use the CryoSPARC Introductory Tutorial.
- To completely end a CryoSPARC session, **Log Out** from either the desktop menu in the upper left or the account name in the upper right, and then press the “Delete” button on OOD.
- To pause your work instead, **Log Out** from the main desktop menu in the upper left corner within the CryoSPARC App when all jobs are no longer running or queued. You may then start the App again later and resume your progress.
- An improperly ended session may leave leftover `*.lock` files in the used CryoSPARC session directory; these may need to be removed before a new session can start.
- The Login URL is printed on the CryoSPARC OOD Info Card.
- The output log of the CryoSPARC App is located at `$WORK/.ondemand/batch_connect/sys/bc_hcc_cryosparc/swan/output/<session_id>/output.log`, where `<session_id>` should be replaced with the Session ID printed in the CryoSPARC Info Card. The format of the Session ID is `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
- Additional information about a CryoSPARC job can be found in its `job.log` file in the Project directory.
- Both the CryoSPARC App and the submitted CryoSPARC jobs run as SLURM jobs; jobs run in the `default` lane run within the CryoSPARC App job itself. You can check the state and resource usage of these jobs with:

  ```
  sacct --format=JobId,JobName%50,State,Node,Elapsed,MaxRSS,ReqMem
  ```

- The SLURM Job Name of the `default` lane is always `ondemand/sys/dashboard/sys/bc_hcc_cryosparc/swan`.
- The SLURM Job Name of jobs submitted via the `swan` or `swan-highmem` lanes is always in the format `cryosparc_<project_uid>_<job_uid>`, where `<project_uid>` and `<job_uid>` are replaced with the CryoSPARC Project ID and CryoSPARC Job ID, respectively. For example, if your CryoSPARC Project ID is 3 and your CryoSPARC Job ID is 187, the SLURM Job Name will be `cryosparc_P3_J187`.
- If a CryoSPARC instance becomes inoperable, you may need to remove the `cs.lock` file in its project directory, as explained in the CryoSPARC documentation.
- If a job fails due to insufficient memory in the `default` lane, please increase the value in the Requested RAM in GBs field in the CryoSPARC Open OnDemand Form.
- If a job fails due to insufficient memory in the `swan` lane, please try the `swan-highmem` lane instead and increase the Highmem factor value in the CryoSPARC Open OnDemand Form accordingly.

If you have any questions or encounter any issues with the CryoSPARC OOD App, please email hcc-support@unl.edu.
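The job-name convention above makes it straightforward to locate a specific CryoSPARC job in the SLURM queue. A minimal sketch (`P3` and `J187` are hypothetical IDs; the commented `squeue` line shows one way the name could be used on Swan):

```shell
# Sketch: construct the SLURM Job Name for a CryoSPARC cluster job.
# P3 and J187 are hypothetical CryoSPARC Project and Job IDs.
PROJECT_UID="P3"
JOB_UID="J187"
JOB_NAME="cryosparc_${PROJECT_UID}_${JOB_UID}"
echo "$JOB_NAME"   # cryosparc_P3_J187

# On Swan, you could then check that job's queue status with, e.g.:
#   squeue --me --name="$JOB_NAME"
```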