Messages & Announcements

  • 2018-12-21:  Tusker Retirement approaches
    Category:  General Announcement

    Dear HCC,

    This announcement concerns Tusker only.

    Tusker's retirement is currently planned for February 18, 2019. At that time, all files on Tusker's /work filesystem will become permanently unavailable. Any files you need from Tusker's /work must be moved before February 18 -- as on all HCC clusters, /work is not backed up, and it will be removed when Tusker is retired.

    A cluster consisting of components of Tusker and the recently retired Sandhills will be assembled at a later date, but files currently on /work will not be available after February 18. Please plan accordingly.

    HCC supports a new filesystem called /common with a 30 TB quota per research group. Files from Tusker's /work may be moved to /common if desired. /common is not purged and is mounted on all HCC clusters, but it is not backed up. Files that cannot be lost should be backed up elsewhere; HCC provides a filesystem named Attic for this purpose at a cost of $25/TB/year.

    Please contact hcc-support@unl.edu with any questions; responses may be delayed over the winter break.

    I hope you have a happy holiday season.

    Best regards,
    David Swanson, HCC Director


  • 2018-12-21:  HCC operations during the shutdown
    Category:  General Announcement

    In accordance with the NU holiday schedule, HCC staff will be on break from Saturday, December 22, 2018, through Wednesday, January 2, 2019. All HCC resources will continue to be operational during this break. HCC staff will be monitoring the systems to ensure availability through the break.

    HCC User Services staff will periodically monitor the ticketing system during the break and will address any system-critical issues. Non-critical tickets and issues will be addressed when we return from the winter break on January 3, 2019. Please email hcc-support@unl.edu if you have any questions.

    Happy Holidays!
    David Swanson, HCC Director


  • 2018-12-19:  Crane: system is available for use, /work filesystem checks ongoing
    Category:  General Announcement

    Checks on the two storage targets were completed this morning. The results of the scan for one of the targets require us to perform a full consistency scan of Crane's /work filesystem. This scan can run concurrently while jobs use /work.

    Please check the status of any jobs that were running at the time of the failure. Jobs using files backed by these two targets may have experienced I/O errors.


  • 2018-12-19:  Crane: /work filesystem unplanned downtime
    Category:  System Failure

    The /work filesystem for Crane is partially unavailable. One of the storage servers had two storage targets go offline around 12:07am today. The storage server will be rebooted and have filesystem checks performed.

    Pending jobs will be held until the maintenance is complete.

    A follow-up announcement will be sent when the system is brought to a production state.


  • 2018-11-28:  Crane: GPU driver update completed
    Category:  Maintenance

    The GPU driver updates have been successfully completed and the GPU nodes are back in service.

    **If you are using your own conda environments for GPU jobs, please note:**
    You may need to update certain packages in your conda environment(s) to newer versions in order to avoid errors with the updated drivers. We highly recommend running a small test job to check functionality before resuming large-scale jobs. If you encounter any issues, please contact us at hcc-support@unl.edu and we will be happy to assist you.
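    One way to run the recommended small test job is a short Slurm batch script like the sketch below. The partition name, environment name, and the PyTorch check are assumptions for illustration; substitute whatever your GPU jobs actually use.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_smoke_test
#SBATCH --partition=gpu              # partition name is an assumption; check your cluster's docs
#SBATCH --gres=gpu:1                 # request a single GPU for the test
#SBATCH --time=00:10:00
#SBATCH --mem=4gb

module purge
module load anaconda
conda activate my_gpu_env            # hypothetical environment name; use your own

# Confirm the updated driver is visible, then exercise the GPU from the environment.
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"  # assumes a PyTorch-based env
```

    If the check fails after the driver update, updating the CUDA-related packages in the environment is the usual first step.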

    Please contact hcc-support@unl.edu with any questions or issues regarding this maintenance.

