Supercomputing Mini Workshop - February 27, 2013

The materials found on this page were applicable at the time of the event. When referencing them, please check the current documentation to ensure the resources are still available. A list of currently available resources can be found on the Submitting Jobs page.

In this hour-long mini workshop, you will get hands-on experience performing a simple calculation (summing the integers from 1 to 16) on a supercomputer, using both serial and parallel computing. No prior knowledge of programming is required. The only thing you need to do is follow the cheat sheets step by step (copy & paste)! We aim to outline the standard supercomputing workflow and demonstrate that taking advantage of state-of-the-art supercomputing resources does not require a huge effort.
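The calculation is small enough to check by hand. As a quick sanity check (not part of the workshop materials), a one-line shell command reproduces the answer the cluster jobs will compute:

```shell
# Sum the integers 1 through 16; the serial and parallel jobs below
# compute this same value on the cluster.
seq 1 16 | awk '{s += $1} END {print s}'
# prints 136
```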

Logging In

ssh crane.unl.edu -l demoXXXX

Cygwin Link

Preparation

Download the demo code.

Copy the two folders to the clipboard:

  1. /getting_started/hcc_tusker/serial_f90
  2. /getting_started/hcc_tusker/parallel_f90

On your local computer:

$ cd ~
$ mkdir demo_code

Next, search your computer for the folder demo_code, and paste the two folders, serial_f90 and parallel_f90, into it.

$ ls
$ scp -r ./demo_code <username>@crane.unl.edu:/work/demo/<username>
<enter password>

Serial Job

First, you need to log in to the cluster:

$ ssh <username>@crane.unl.edu
<enter password>

Next, you will move to the working filesystem:

$ cd /work/demo/<username>
$ ls
$ cd demo_code
$ ls
$ cd serial_f90
$ ls

Now, load the compiler configuration:

$ module load compiler/intel/12

And compile the Fortran code:

$ ifort fortran_serial.f90 -o fortran_serial.x

Next, you will submit the job to the cluster scheduler using the file submit_tusker.serial. After submission, the scheduler will assign your job to run on a node in the cluster.
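The contents of submit_tusker.serial are not reproduced in this handout. As a rough sketch only (the directive values below are illustrative assumptions, not the actual file), a Torque/PBS submit script for a serial job looks something like this:

```shell
#!/bin/sh
#PBS -N demo_serial        # job name (illustrative; the real file may differ)
#PBS -l nodes=1:ppn=1      # one core on one node
#PBS -l walltime=00:05:00  # five-minute time limit
#PBS -o demo.out           # write program output to demo.out
#PBS -j oe                 # merge stdout and stderr into one file

cd $PBS_O_WORKDIR          # start in the directory qsub was run from
./fortran_serial.x         # run the compiled serial program
```

The `#PBS` lines are directives read by the scheduler; everything after them is an ordinary shell script executed on the assigned node.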

$ qsub submit_tusker.serial

You can watch the status of the job using showq:

$ showq -u <username>

After the job has completed, it will disappear from the showq output. You can check the output of the program in demo.out:

$ cat demo.out

Parallel Job 

While still logged into the cluster as in the serial job, change to the working directory:

$ cd /work/demo/<username>/demo_code/parallel_f90
$ ls

Then load both the compiler and the MPI (Message Passing Interface) library for parallel applications:

$ module load compiler/intel/12
$ module load openmpi/1.6

Next, you will compile the parallelized version of the summation code. It uses MPI for communication between the parallel processes.
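As a rough illustration of the idea in plain shell (this is an analogy, not actual MPI): each of four "ranks" sums its own slice of the integers 1 to 16, and the partial sums are then combined, much as the MPI program combines per-process partial sums into a single result.

```shell
# Shell analogy of the parallel summation: four "ranks" each sum
# four of the sixteen integers, then the partial sums are combined.
total=0
for rank in 0 1 2 3; do
  lo=$((rank * 4 + 1))            # first integer for this rank
  hi=$((rank * 4 + 4))            # last integer for this rank
  part=$(seq $lo $hi | awk '{s += $1} END {print s}')
  total=$((total + part))         # combine the partial sums
done
echo $total                       # prints 136, matching the serial answer
```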

$ mpif90 fortran_mpi.f90 -o fortran_mpi.x

Next, we will submit the MPI application to the cluster scheduler using the file submit_tusker.mpi.
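As with the serial case, the actual contents of submit_tusker.mpi are not shown here. A hedged sketch of what such a Torque/PBS script typically contains (all values below are illustrative assumptions) is:

```shell
#!/bin/sh
#PBS -N demo_mpi           # job name (illustrative; the real file may differ)
#PBS -l nodes=2:ppn=8      # cores across multiple nodes (values assumed)
#PBS -l walltime=00:05:00  # five-minute time limit
#PBS -o demo.out           # write program output to demo.out
#PBS -j oe                 # merge stdout and stderr

cd $PBS_O_WORKDIR
module load compiler/intel/12 openmpi/1.6
mpirun ./fortran_mpi.x     # launch one MPI process per allocated core
```

The key difference from the serial script is the larger resource request and the use of mpirun to start the cooperating MPI processes.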

$ qsub submit_tusker.mpi

The cluster scheduler will pick machines (possibly several, depending on availability) to run the parallel MPI application. You can check the status of the job the same way you did for the serial job:

$ showq -u <username>

Once you see the job has disappeared from the output of showq, you can look at the output using the command:

$ cat demo.out

Useful Commands

Command                  Description
$ pwd                    Print working directory: show the directory you are currently in.
$ cd <directory>         Change the working directory to <directory>.
$ cd ..                  Change to the directory above the current one.
$ cat <filename>         Print the contents of a file.
$ showq -u <username>    Show the status of jobs for user <username>.
$ qdel <jobid>           Remove job <jobid> from the queue, stopping it if it is running.