Partitions are used on Swan to distinguish different resources. You can view the partitions with the command `sinfo`.
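For example, a minimal sketch of inspecting partitions from a login node (the output columns depend on the Slurm configuration; `batch` refers to the general access queue mentioned below):

```bash
# List all partitions with their time limits, node counts, and states
sinfo

# Limit the listing to a single partition, e.g. the general access batch queue
sinfo --partition=batch
```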
To run short jobs for testing and development work, a job can specify a different quality of service (QoS). The short QoS increases a job's priority so it will run as soon as possible.
| SLURM Specification |
|---|
| #SBATCH --qos=short |
Overall limitations on maximum job wall time, CPUs, etc. are set for all jobs on Swan, both with the default setting (when the "--qos=" option is omitted) and for "short" jobs (described above). The limitations are shown in the following table.
| QoS | SLURM Specification | Max Job Run Time | Max CPUs per User | Max Jobs per User |
|---|---|---|---|---|
| Default | Leave blank | 7 days | 2000 | 1000 |
| Short | #SBATCH --qos=short | 6 hours | 16 | 2 |
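For example, a minimal sketch of a submit script that requests the short QoS and stays within the limits above (the job name, output file, memory request, and program are placeholders):

```bash
#!/bin/bash
#SBATCH --qos=short                 # higher-priority QoS for testing/development
#SBATCH --time=01:00:00             # must be at most 6 hours under the short QoS
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4           # total CPUs per user are limited to 16 under the short QoS
#SBATCH --mem=4gb                   # placeholder memory request
#SBATCH --job-name=short-test       # placeholder job name
#SBATCH --output=short-test.%j.out  # placeholder output file

# Placeholder test workload
./my_test_program
```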
Please also note that the memory and local hard drive limits are subject to the physical limitations of the nodes, described in the resource capabilities section of the HCC Documentation and in the partition sections above.
A partition marked as owned by a group can only accept jobs submitted by specific groups. Groups are manually added to the list of those allowed to submit jobs to the partition. If you are unable to submit jobs to a partition and you feel that you should be able to, please contact hcc-support@unl.edu.
To submit jobs to an owned partition, use the SLURM `--partition` option. Jobs can be submitted either to the owned partition only, or to both the owned partition and the general access queue. For example, assuming a partition named `mypartition`:
#SBATCH --partition=mypartition
Submitting solely to an owned partition means jobs will start immediately until the resources on the partition are full, then queue until prior jobs finish and resources become available.
#SBATCH --partition=mypartition,batch
Submitting to both an owned partition and `batch` means jobs are eligible to run on either the owned partition or the general batch queue. Jobs will start immediately until the resources on the partition are full, then queue. Pending jobs will then start either on the owned partition or in the general queue, wherever resources become available first (taking FairShare into account). Unless there are specific reasons to limit jobs to owned resources, this method is recommended to maximize job throughput.
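For example, a sketch of a submit script using this recommended approach, again assuming a partition named `mypartition`; the resource requests, job name, and program are placeholders:

```bash
#!/bin/bash
#SBATCH --partition=mypartition,batch   # eligible for the owned partition and the general batch queue
#SBATCH --time=1-00:00:00               # placeholder: 1 day wall time
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=2gb
#SBATCH --job-name=owned-or-batch       # placeholder job name
#SBATCH --output=owned-or-batch.%j.out  # placeholder output file

# Placeholder workload; Slurm starts it wherever resources free up first
srun ./my_parallel_program
```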
The `guest` partition can be used by users and groups that do not own dedicated resources on Swan. Jobs running in the `guest` partition will run on the owned resources with Intel OPA interconnect. These jobs are preempted when the resources are needed by the resource owners and are restarted on another node.
We have put the Anvil nodes that are not running OpenStack in this partition. They have two Intel Xeon E5-2650 v3 2.30GHz CPUs (20 cores) and 256GB of memory per node. However, they do not have InfiniBand or OPA interconnect, so they are suitable for serial or single-node parallel jobs. The nodes in this partition are subject to being drained and moved to our OpenStack cloud without advance notice when more cloud resources are needed.
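For example, a sketch of a submit script targeting the `guest` partition; since guest jobs can be preempted and restarted on another node, the (placeholder) workload should tolerate being restarted:

```bash
#!/bin/bash
#SBATCH --partition=guest          # run on idle owned resources
#SBATCH --time=12:00:00            # placeholder wall time
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=2gb
#SBATCH --job-name=guest-job       # placeholder job name
#SBATCH --output=guest-job.%j.out  # placeholder output file

# Placeholder workload; may be preempted and restarted on another node
./my_program
```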