Messages & Announcements

  • 2019-02-22:  Tusker Resources Reduced Capacity
    Category:  General Announcement

    This announcement is relevant to Tusker usage only. Tusker computing resources are almost completely reserved at this time; nearly the entire remaining cluster is currently leased. Leased (priority-access) usage will continue until March 18. The login node remains generally available, and users may still access their files. Opportunistic computing via the guest partition will also be available until the move date of March 18. One large-memory node remains available for general use, but most other jobs will simply queue and wait at this point.

    When the cluster is moved on March 18, files stored on /work on Tusker will no longer be available (the filesystem will be physically and logically rebuilt following the move). The cluster will come back online later this spring, at which time several nodes from the former Sandhills cluster will be combined with the current Tusker hardware to form a new cluster, to be known as Rhino going forward.


    Users without priority access who wish to compute on Tusker in the meantime may do so in opportunistic mode. Please see the HCC documentation here: https://hcc.unl.edu/docs/guides/submitting_jobs/partitions/tusker_available_partitions/

    You may use the guest partition to run jobs on Tusker on nodes that are currently leased by others. These jobs may be preempted at any time, so they should write restart files frequently. Please check with HCC staff at hcc-support@unl.edu if you have any questions.
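    Submitting to the guest partition follows the usual SLURM batch-script pattern. The sketch below is a minimal example only: the partition name `guest` comes from this announcement, while the job name, the `my_app` executable, its `--restart` flag, and the `restart_*.chk` checkpoint files are illustrative placeholders, not HCC-provided code.

    ```shell
    #!/bin/bash
    #SBATCH --job-name=guest-job        # illustrative name
    #SBATCH --partition=guest           # opportunistic partition named in this announcement
    #SBATCH --time=24:00:00
    #SBATCH --requeue                   # automatically re-queue the job if it is preempted

    # Resume from the newest restart file if one exists; otherwise start fresh.
    # "my_app" and its --restart flag stand in for your own application.
    latest=$(ls -t restart_*.chk 2>/dev/null | head -n 1)
    if [ -n "$latest" ]; then
        ./my_app --restart "$latest"
    else
        ./my_app
    fi
    ```

    Because guest jobs can be stopped whenever a lease holder reclaims the nodes, the application itself must write its checkpoint files periodically; `--requeue` only restarts the job from the beginning of the script and does not preserve in-memory state.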

  • 2019-02-11:  Anvil: Unplanned downtime resolved
    Category:  System Failure

    The CEPH storage system for Anvil is functioning again and Anvil is available for use. Please check your VM instances to ensure they are operating correctly and contact hcc-support@unl.edu if you have further questions.


  • 2019-02-11:  Anvil: Unplanned downtime due to issue with storage system
    Category:  System Failure

    The CEPH storage system for Anvil is currently experiencing problems. Expect VMs hosted on Anvil to be slow or unresponsive until the problem is resolved.


  • 2019-02-08:  Crane: JupyterHub maintenance complete
    Category:  General Announcement

    The JupyterHub service maintenance is complete. Please contact us at hcc-support@unl.edu if you have any questions or issues.

