The Holland Computing Center supports a diverse collection of research computing hardware. Anyone in the University of Nebraska system is welcome to apply for an account on HCC machines.
By default, access to these resources is shared with the rest of the user community via various job schedulers; the scheduling policies may be found on the pages for the individual resources. Alternatively, a user may buy into an existing resource, acquiring ‘priority access’. Finally, several machines are available via Condor for opportunistic use. This allows users almost immediate access, but jobs are subject to preemption.
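For opportunistic use, a job is described in an HTCondor submit file and queued to run whenever idle capacity is available. The sketch below is illustrative only: the executable name and file paths are placeholders, not taken from HCC's documentation.

```
# Minimal HTCondor submit-description file (hypothetical job;
# analyze.sh is a placeholder, not from HCC's documentation).
universe   = vanilla
executable = analyze.sh
output     = job.$(Cluster).$(Process).out
error      = job.$(Cluster).$(Process).err
log        = job.log
# Opportunistic jobs can be preempted at any time; transferring
# files on eviction preserves partial results.
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT_OR_EVICT
queue
```

Such a file is submitted with `condor_submit`; a preempted job returns to the queue and is rescheduled when capacity frees up.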
To begin using HCC resources, first choose a cluster:

- Swan: Swan is the newest and most powerful HCC resource. If you are new to HCC resources, Swan is the recommended cluster to start with. Each Swan node has 2 Intel Ice Lake CPUs (56 cores total) and 256GB RAM.
- Crane: Crane is the largest HCC resource. Limitations: standard Crane nodes have only 2 CPUs (16 cores) and 64GB RAM per node; CraneOPA nodes have 2 CPUs (36 cores) and a maximum of 512GB RAM per node.
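Once your account is approved, you connect to a cluster's login node over SSH. A minimal example, assuming the hostname `swan.unl.edu` for Swan (replace `<username>` with your HCC username):

```
$ ssh <username>@swan.unl.edu
```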
Important Notes
The /home directories are not intended for I/O from running jobs. You must use your /work directory for processing in your job. You may access your work directory by using the command:

```
$ cd $WORK
```
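HCC clusters such as Swan and Crane schedule work through SLURM. The batch script below is a minimal sketch of a job that runs out of $WORK, per the note above; the job name, resource values, and the `my_project`/`my_analysis` paths are hypothetical placeholders.

```bash
#!/bin/bash
#SBATCH --job-name=example        # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --mem=4G                  # illustrative memory request
#SBATCH --time=01:00:00
#SBATCH --output=example.%j.out   # %j expands to the job ID

# Run from $WORK rather than /home, as required above.
cd $WORK/my_project   # placeholder project directory
./my_analysis         # placeholder executable
```

The script is submitted with `sbatch`, e.g. `sbatch example.slurm`, and its resource requests are matched against the node types in the table below.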
| Cluster | Overview | Processors | RAM* | Connection | Storage |
|---|---|---|---|---|---|
| Crane | 572-node LINUX cluster | 452 Intel Xeon E5-2670 2.60GHz, 2 CPU/16 cores per node<br>120 Intel Xeon E5-2697 v4 2.3GHz, 2 CPU/36 cores per node (“CraneOPA”) | 452 nodes @ 62.5GB<br>79 nodes @ 250GB<br>37 nodes @ 500GB<br>4 nodes @ 1500GB | QDR Infiniband<br>EDR Omni-Path Architecture | ~1.8 TB local scratch per node<br>~4 TB local scratch per node<br>~1452 TB shared Lustre storage |
| Swan | 168-node LINUX cluster | 168 Intel Xeon Gold 6348, 2 CPU/56 cores per node | 168 nodes @ 256GB<br>2 nodes @ 2000GB | HDR100 Infiniband | 3.5TB local scratch per node<br>~5200TB shared Lustre storage |
| Red | 344-node LINUX cluster | Various Xeon and Opteron processors; 7,280 cores maximum (actual number of job slots depends on RAM usage) | 1.5-4GB RAM per job slot | 1Gb, 10Gb, and 40Gb Ethernet | ~10.8PB of raw storage space |
| Anvil | 76 compute nodes (partially used for cloud, the rest for general computing), 12 storage nodes, 2 network nodes; OpenStack cloud | 76 Intel Xeon E5-2650 v3 2.30GHz, 2 CPU/20 cores per node | 76 nodes @ 256GB | 10Gb Ethernet | 528 TB Ceph shared storage (349TB currently available) |
* Due to operating system and hardware overhead, the maximum available memory on each node is lower than the total installed memory. Requesting more memory than is actually available may result in your job not running, being delayed, or running on a smaller number of nodes.
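As a concrete illustration (the figure comes from the table above, not an HCC recommendation): on a Crane node with 64GB installed but roughly 62.5GB available, requesting the full 64GB can leave a job pending, while a request at or below the available amount schedules normally:

```bash
#SBATCH --mem=62G   # stays under the ~62.5GB usable on a 64GB Crane node
```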