The Holland Computing Center (HCC) provides advanced cyberinfrastructure, software, and expertise to support University of Nebraska (NU) researchers in Artificial Intelligence (AI), Machine Learning (ML), and other research computing. With high-performance and high-throughput computing clusters, GPU resources, large-scale storage systems, and specialized software environments, HCC enables researchers to develop, run, and scale modern AI/ML workflows. These resources are supported by expert staff who design and maintain the infrastructure, manage software deployment, work closely with researchers, and provide targeted training to bridge the gap between research needs and computing capabilities.
AI-Capable Resources at HCC
Swan, HCC's main high-performance computing resource, is the primary way to use GPUs for AI research at HCC. The Swan cluster currently contains the following GPUs across a variety of node configurations. Many of these GPUs are owned or leased by research faculty, who receive priority access, but all are available opportunistically to anyone with an HCC account (a minimal sketch for verifying GPU access from within a job follows the GPU lists below).
Swan's GPU inventory consists of:
- 4× A100 (80 GB)
- 14× A30 (24 GB)
- 8× L40S (48 GB)
- 3× H100 (96 GB)
- 4× V100 (16 GB)
- 62× V100S (32 GB)
- 24× T4 (16 GB)
- 7× RTX 5000 (16 GB)
- 2× RTX 8000 (48 GB)
- 24× A6000 (48 GB)
In addition to Swan's current inventory, more GPUs will be added as part of NSF Grant #2430234. All new GPUs from the grant will be fully and freely available to NU researchers:
- 6× NVIDIA H200 NVL (141 GB)
- 52× NVIDIA L40S (48 GB)
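As a quick way to confirm that a job actually received one of these GPUs, a minimal PyTorch check such as the sketch below can be run from inside the job. It assumes a CUDA-enabled PyTorch installation is available in your environment (for example via a module or a conda environment), which may vary by setup.

```python
import torch

# List the CUDA-capable GPUs visible to this job (a minimal sanity check, not an HCC-specific tool).
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GiB")
else:
    print("No GPU visible; make sure the job requested a GPU.")
```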
To enable high-performance AI workflows, HCC also provides researchers with a variety of storage options, all at no additional cost except Attic (a sketch of moving data between tiers during a job follows this list):
- Work – High-performance global scratch (100 TiB/group, 6-month automatic purge)
- NRDStor – General-purpose (50 TiB/group, no purge)
- Home – Swan user home directories (20 GiB/user, no purge)
- Attic – Near-line archival ($28/TiB, no purge, offsite replication)
- Local /scratch – Node-local flash storage for running jobs
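As one illustration of how these tiers fit together in a job, the sketch below stages input data from Work onto node-local /scratch before processing and copies results back afterward. The WORK environment variable and the node-local scratch path are assumptions for illustration; consult HCC's storage documentation for the exact paths on Swan.

```python
import os
import shutil
from pathlib import Path

# Illustrative paths only; the real locations are documented by HCC and may differ.
work_dir = Path(os.environ.get("WORK", "/work/mygroup/myuser"))   # global scratch (purged after 6 months)
local_scratch = Path(os.environ.get("TMPDIR", "/scratch"))        # node-local flash storage for the running job

# Stage the input once onto fast node-local storage to avoid repeated reads from global scratch.
src = work_dir / "datasets" / "train.tar"
dst = local_scratch / "train.tar"
if not dst.exists():
    shutil.copy2(src, dst)

# ... run training or analysis against dst here ...

# Copy results off node-local /scratch before the job ends, since it is not retained afterward.
results = local_scratch / "model.ckpt"
if results.exists():
    (work_dir / "results").mkdir(parents=True, exist_ok=True)
    shutil.copy2(results, work_dir / "results" / "model.ckpt")
```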
Extended Resources
Through the National Research Platform (NRP), HCC facilitates access to ~30,000 CPU cores and 1,500 GPUs across 80+ sites, many hosted at HCC. NRP also offers managed large language model (LLM) services via API or JupyterHub.
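The exact endpoint, model names, and authentication for the NRP LLM services are provided through NRP rather than reproduced here; as a hedged sketch only, a call to an OpenAI-compatible chat endpoint might look like the following, with the base URL, model name, and token shown as placeholders.

```python
import os
import requests

# Placeholder values; obtain the real endpoint, model name, and API token from NRP documentation.
base_url = os.environ.get("LLM_BASE_URL", "https://llm.example.org/v1")
api_key = os.environ.get("LLM_API_KEY", "replace-me")

resp = requests.post(
    f"{base_url}/chat/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "example-model",
        "messages": [{"role": "user", "content": "What GPU resources does HCC's Swan cluster offer?"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```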
System Team Expertise
Expert in designing and managing large-scale, high-performance, high-throughput cyberinfrastructure for AI, ML, and other research computing, including NU’s Swan cluster and NSF-supported systems such as the CMS Red cluster, PATh, NRP, OSG, and NRDStor, supporting research efforts at regional, national, and international scales.
Software and Applications
HCC provides a wide range of pre-installed software packages, including popular AI/ML frameworks, domain-specific tools, and research applications. Software is managed via the module system on Swan. Available software on Swan: https://hcc.unl.edu/docs/applications/modules/available_software_for_swan/
Examples include:
- General AI/ML platforms and tools: Anaconda, Mamba, Apptainer, CUDA, TensorFlow, PyTorch, Jupyter Notebook, LMStudio, etc.
- Domain-specific AI/ML software: AlphaFold for AI-based protein structure prediction, fair-esm (pre-trained transformer models) for protein analysis, GROMACS for molecular dynamics, Ollama for large language models (LLMs), etc. (a minimal fair-esm usage sketch follows this list).
- Shared AI/ML datasets: Access to popular datasets such as ImageNet, TCGA, ASHS, etc. Many AI/ML software packages (e.g., AlphaFold, RoseTTAFold) require large datasets, and HCC has pre-downloaded and configured these system-wide where possible to avoid quota and purge-policy issues. If it is unclear whether a required dataset is already available, check the module information or contact hcc-support@unl.edu before attempting a download.
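As an example of one of the domain-specific packages above, the sketch below loads a pre-trained ESM-2 protein language model with fair-esm and computes per-residue embeddings for a single sequence. It assumes fair-esm and PyTorch are available in your environment and that the model weights are either cached system-wide or downloadable; check the relevant module on Swan before running.

```python
import torch
import esm  # provided by the fair-esm package

# Load a pre-trained ESM-2 model and its alphabet (weights are downloaded on first use unless already cached).
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

# A single example sequence; replace with your own (label, sequence) pairs.
data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
labels, strs, tokens = batch_converter(data)

# Compute per-residue representations from the final (33rd) layer without tracking gradients.
with torch.no_grad():
    out = model(tokens, repr_layers=[33], return_contacts=False)
embeddings = out["representations"][33]
print(embeddings.shape)  # (batch, sequence length + special tokens, embedding dim)
```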
Requesting software:
Submit a software installation request here: https://hcc.unl.edu/software-installation-request
Application team expertise:
Expert in software deployment, researcher engagement, and training, helping faculty adapt workflows to leverage HCC resources and bridge the gap between research needs and computing capabilities.
Training
HCC actively engages with NU researchers to identify computing needs, support collaborative proposals, and participate in national and regional computing initiatives. To see upcoming training events, visit the HCC Upcoming Events Page.
Regular engagement:
- Open Office Hours: Online via Zoom every Tuesday & Thursday, 2–3 p.m. (CT), for direct technical support. Zoom link: http://go.unl.edu/HCChelp
- General Support: Users can email hcc-support@unl.edu, or submit a new ticket via the web.
- One-on-One Appointments: Individual consultations to discuss research needs, explore computing solutions, and provide guidance on using computing resources.
Sample AI/ML-related training events:
- Introduction to Artificial Intelligence and Machine Learning using HCC
- Introduction to Using the National Research Platform
- GP-ENGINE – Migrating AI/ML Workflows to Nautilus
- GP-ENGINE Nautilus Tutorial
Full list of HCC training events: https://hcc.unl.edu/past-events
How to Access HCC Resources
Access to HCC resources is granted under HCC groups owned by NU faculty.
- NU faculty without an existing group should complete the group application first.
- Researchers can request access under a specific HCC group through the new user request form.
- Once approved, HCC staff assist with onboarding accounts, groups, and resource allocations for research teams.
Most HCC resources are available to users on a shared basis at no cost, while Priority Access is offered on a cost-recovery basis.
If you have any questions about HCC's resources or how to get started, please email hcc-support@unl.edu or drop by one of our Open Office Hours, held online via Zoom every Tuesday & Thursday, 2–3 p.m. (CT). Zoom link: http://go.unl.edu/HCChelp