Messages & Announcements

  • 2016-01-04:  SANDHILLS available for normal use
    Category:  General Announcement

    After a hardware failure the SANDHILLS head node has been temporarily replaced and job scheduling should work again. We plan to migrate the head node services to a more permanent solution at some point in the future and will schedule a downtime when that occurs. For now the system should be fully functional. Please email hcc-support@unl.edu with any questions or issues.


  • 2016-01-02:  SANDHILLS outage - job scheduling unavailable
    Category:  System Failure

    The SANDHILLS cluster is currently unavailable for running jobs due to a hardware failure of the head node. The login node will remain up for the time being, and files should be accessible. All scheduling functionality (sbatch, squeue, etc.), however, will almost certainly fail.

    We will work towards a solution first thing on Monday and send additional announcements as necessary.


  • 2015-11-28:  SANDHILLS available after power outage
    Category:  System Failure

    Partial power outage in SANDHILLS resolved.

    A power outage on Thanksgiving Day caused some worker nodes in SANDHILLS to become unavailable. Power has been restored and SANDHILLS is fully operational. It is likely that some jobs were killed by the outage, but we believe no files were affected. Please send an email to hcc-support@unl.edu if you find any problems.


  • 2015-09-22:  Resources available via Crane
    Category:  General Announcement

    Dear HCC community,

    A new partition, called tmp_anvil, with 50+ nodes has been added to Crane temporarily. Each node has two Intel Xeon E5-2650 v3 processors, for a total of 20 cores running at 2.3GHz, along with 256GB of RAM. The only downsides are that the partition is temporary and that the nodes do not have InfiniBand. MPI jobs should be compiled as SMP if they are going to run on these nodes, and a job cannot run on more than one node (but may use up to 20 tasks per node). SMP jobs needing a lot of memory are a good fit for these nodes. On a per-node basis, these machines are now the most capable Intel systems at HCC.


    To submit jobs to this partition from Crane, please add the following directive to your submit script:

        #SBATCH --partition=tmp_anvil
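
    For reference, a minimal single-node submit script for this partition might look like the sketch below; the memory request, walltime, job name, and executable are illustrative placeholders, not HCC-mandated values:

        #!/bin/bash
        #SBATCH --partition=tmp_anvil
        #SBATCH --nodes=1               # tmp_anvil jobs cannot span nodes (no InfiniBand)
        #SBATCH --ntasks-per-node=20    # up to 20 tasks (cores) per node
        #SBATCH --mem=200G              # hypothetical request; each node has 256GB of RAM
        #SBATCH --time=12:00:00         # hypothetical walltime
        #SBATCH --job-name=smp_example  # hypothetical job name

        ./my_smp_program                # hypothetical SMP (single-node) executable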

    Further questions should be directed to hcc-support@unl.edu.

    HCC has begun the process of building a local research cloud, which we will name Anvil, a nod to the anvil clouds periodically seen on the Great Plains. The hardware for this machine has been purchased, but building the underlying infrastructure will be an ongoing, incremental process. In the meantime, we will provision some significant compute hardware as part of Crane rather than letting it sit idle. We will give notice when we are ready to repurpose these nodes back to Anvil; when that happens, jobs that have already started will be allowed to finish.


  • 2015-08-24:  Tusker available for use
    Category:  General Announcement

    Tusker is open for use again. The work filesystem has been rebuilt, and users may begin writing to the (currently empty) filesystem and running jobs. Please contact hcc-support@unl.edu if you encounter any problems.