Messages & Announcements

  • 2012-04-10:  Correction: SANDHILLS Infiniband outage 4/16
    Category:  Maintenance

    Correction to the previous announcement: the SANDHILLS Infiniband outage will take place on 4/16 (not 4/23).

  • 2012-04-10:  SANDHILLS Infiniband outage 4/23
    Category:  Maintenance

    On 4/23 the Infiniband network used by SANDHILLS will be unavailable while upgraded hardware is installed. Serial jobs, and parallel jobs that do not use Infiniband (i.e., shared-memory jobs), will not be affected. The outage will begin Monday afternoon and will likely last through the afternoon.

  • 2012-04-12:  HCC User Meetings
    Category:  General Announcement

    With the opening of Tusker to the HCC community, HCC User Meetings are coming soon to the three campuses with significant numbers of HCC users: UNL (April 25th, http://hcc.unl.edu/presentations/event.php?ideventof=9); UNMC (April 27th, http://hcc.unl.edu/presentations/event.php?ideventof=1); and UNO/PKI (April 30th, http://hcc.unl.edu/presentations/event.php?ideventof=8).

    Registration is free, but please register for the event of your choice at the pages listed above to assist us in planning. In addition, all are invited to a reception celebrating the installation of Tusker, at 5pm in the Schorr Center, UNL.


  • 2012-04-04:  Tusker Open for Use
    Category:  General Announcement

    We are pleased to announce that HCC's newest cluster, "tusker", is now available to the HCC user community. This cluster of 104 nodes, each containing 64 cores and 256 GB of RAM (4 GB/core), scores 43.3 TFLOPS on the HPL benchmark, twice the mark registered by its predecessor, "firefly." Firefly will remain in production for the next several months to allow users to transition conveniently to the new resource.
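
    As a quick sanity check of the figures above (the totals below are derived from this announcement itself, not from a separate specification):

        # Totals implied by the announcement: 104 nodes, 64 cores and
        # 256 GB of RAM per node, 43.3 TFLOPS on HPL.
        nodes = 104
        cores_per_node = 64
        ram_per_node_gb = 256

        total_cores = nodes * cores_per_node                  # 6656 cores
        total_ram_gb = nodes * ram_per_node_gb                 # 26624 GB, ~26 TB
        ram_per_core_gb = ram_per_node_gb / cores_per_node     # 4.0 GB/core, as quoted

        hpl_tflops = 43.3
        firefly_tflops = hpl_tflops / 2                        # ~21.65 TFLOPS implied for Firefly

        print(total_cores, total_ram_gb, ram_per_core_gb, firefly_tflops)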

    Various details may be found at the following FAQ: http://hcc.unl.edu/hcccreditedit/faq.php?tp=Tusker

    Briefly, Tusker has two physical filesystems, /home and /work. /home will employ group quotas of 10 GB, while /work is a 360 TB Lustre filesystem intended for temporary working-set files. /work has a liberal group quota of 50 TB, but files are subject to removal if the system nears capacity. /home is read-only from all worker nodes (where jobs run), so users must configure their code to write its output to /work.

    Users should be aware that this is an entirely new system: all code, even if it was previously tested on Firefly, should be recompiled for Tusker. Submission scripts will likewise be very similar, but slightly different.
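
    The /work convention above can be illustrated with a minimal sketch, assuming a hypothetical /work/<group>/<user> layout (the actual directory structure on Tusker may differ):

        import os
        from pathlib import Path

        # Hypothetical layout: /work/<group>/<user>/<job name>.
        # /home is read-only on the worker nodes, so job output must be
        # written under /work instead. The root is configurable here so
        # the sketch also runs off-cluster.
        work_root = Path(os.environ.get("WORK", "/work"))
        group = os.environ.get("GROUP", "mygroup")   # placeholder group name
        user = os.environ.get("USER", "myuser")      # placeholder user name

        workdir = work_root / group / user / "example_job"
        workdir.mkdir(parents=True, exist_ok=True)

        # Write results under /work, never under the read-only /home.
        (workdir / "output.txt").write_text("results go here\n")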

    The same account information will allow you to log in and try out the new resource. Please contact hcc-support@unl.edu if you have any questions or encounter any trouble.

    Best regards,
    David

    David R. Swanson, Director
    Holland Computing Center
    118K Schorr Center
    University of Nebraska-Lincoln
    402-472-5006


  • 2012-04-01:  Tusker Downtime followed by Opening
    Category:  General Announcement

    HCC recently finished deploying a new cluster named "TUSKER". A limited number of test users have been putting it through an initial stress test, and the results have been positive enough to warrant placing it into production as of Thursday, April 5. Some hardware and software upgrades will require a downtime on Wednesday, April 4, starting at 9am. Current test users should be aware that all jobs will be killed for this downtime.

    Tusker will be sufficiently distinct from other existing HCC clusters that new compilation and/or submission scripts will be required.

    More information will be sent soon. Users may consult the FAQ at
    http://hcc.unl.edu/hcccreditedit/faq.php?tp=Tusker