The Holland Computing Center supports a diverse collection of research computing hardware. Anyone in the University of Nebraska system is welcome to apply for an account on HCC machines.
Access to these resources is, by default, shared with the rest of the user community via various job schedulers; the scheduling policies may be found on the pages for the individual resources. Alternatively, a user may buy into an existing resource, acquiring ‘priority access’. Finally, several machines are available via Condor for opportunistic use. This allows users almost immediate access, but jobs are subject to preemption.
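Where opportunistic use applies, a rough sketch of an HTCondor submission looks like the following; the executable and file names are illustrative placeholders, and the actual submission host and policies are documented on the page for each resource:

```bash
# Minimal sketch of opportunistic use via HTCondor (file names are illustrative).
cat > my_job.submit <<'EOF'
universe   = vanilla
executable = my_program
output     = my_job.out
error      = my_job.err
log        = my_job.log
queue
EOF

condor_submit my_job.submit   # job runs when idle cycles are available and may be preempted
```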
Crane: Crane is the newest and most powerful HCC resource. If you are new to using HCC resources, Crane is the recommended cluster to start with. Limitations: most Crane nodes have only 2 CPU/16 cores and 64GB RAM. If your job requires more than 16 cores per node or more than 64GB of memory, consider using Tusker instead.
Tusker: Similar to Crane, Tusker is another cluster shared by all campus users. It has 4 CPU/64 cores and 256GB RAM per node; two nodes have 512GB RAM for very large memory jobs. For jobs requiring more than 16 cores per node or large amounts of memory, Tusker is therefore the better option.
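As a minimal sketch of how these limits translate into a job request, assuming the SLURM scheduler used on Crane and Tusker (the job name, output file, and executable below are illustrative placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=my_job        # illustrative job name
#SBATCH --nodes=1                # single node
#SBATCH --ntasks-per-node=32     # >16 cores per node, so this belongs on Tusker
#SBATCH --mem=128G               # >64GB, more than a standard Crane node offers
#SBATCH --time=02:00:00          # wall-clock limit (HH:MM:SS)
#SBATCH --output=my_job.%j.out   # output file; %j expands to the job ID

cd $WORK                         # run from the work filesystem, not /home
./my_program                     # placeholder for your actual executable
```

A request that exceeds what any node on a cluster can provide will not be able to run, which is why the per-node core and memory limits matter when choosing between Crane and Tusker.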
Logging into Crane or Tusker
$ ssh <username>@crane.unl.edu
or
$ ssh <username>@tusker.unl.edu
Duo two-factor authentication is required for access to HCC resources. Instructions for registering and using Duo can be found in the section: Setting up and using Duo
Jobs should not be run from the /home directories. You must use your /work directory for processing in your job. You may access your work directory by using the command:
$ cd $WORK
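For example, a typical pattern (with hypothetical file and directory names) is to stage input data into the work filesystem before submitting a job:

```bash
$ cd $WORK
$ mkdir -p my_project && cd my_project   # hypothetical project directory
$ cp $HOME/input.dat .                   # stage input data from your home directory
$ pwd                                    # confirm you are under the work filesystem
```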
| Cluster | Overview | Processors | RAM | Connection | Storage |
| ------- | -------- | ---------- | --- | ---------- | ------- |
| Crane | 548 node Production-mode LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node; 116 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node | 452 nodes @ *64GB; 79 nodes @ **256GB; 37 nodes @ ***512GB | EDR Omni-Path Architecture | ~1.8 TB local scratch per node; ~4 TB local scratch per node; ~1452 TB shared Lustre storage |
| Tusker | 82 node Production-mode LINUX cluster | Opteron 6272 2.1GHz, 4 CPU/64 cores per node | **256 GB RAM per node; ***2 nodes with 512GB per node; ****1 node with 1024GB per node | QDR Infiniband | ~500 TB shared Lustre storage; ~500GB local scratch |
| Red | 344 node Production-mode LINUX cluster | Various Xeon and Opteron processors; 7,280 cores maximum, actual number of job slots depends on RAM usage | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~6.67PB of raw storage space |
| Anvil | 76 Compute nodes (partially used for cloud, the rest for general computing), 12 Storage nodes, 2 Network nodes, OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz, 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB available now) |
You may only request the following amount of RAM: