Moab Adaptive Computing Suite Administrator's Guide 5.4

Resource Manager Extensions

Not all resource managers are created equal; the capabilities available vary widely from system to system. Additionally, there is a large body of functionality that many, if not all, resource managers have no concept of. A good example is job QoS: because most resource managers have no concept of quality of service, they provide no mechanism for users to specify this information. In many cases, Moab is able to add such capabilities at a global level. However, a number of features require a per-job specification. Resource manager extensions allow this per-job information to be associated with the job.

Resource Manager Extension Specification

Specifying resource manager extensions varies by resource manager. TORQUE, OpenPBS, PBSPro, Loadleveler, LSF, S3, and Wiki each allow the specification of an extension field as described below:

TORQUE 2.0+
Method: -l
Example:
> qsub -l nodes=3,qos=high sleepy.cmd

TORQUE 1.x, OpenPBS
Method: -W x=
Example:
> qsub -l nodes=3 -W x=qos:high sleepy.cmd
Note: OpenPBS does not support this ability by default but can be patched as described in the PBS Resource Manager Extension Overview.

Loadleveler
Method: #@comment
Example:
#@nodes = 3
#@comment = qos:high

LSF
Method: -ext
Example:
> bsub -ext advres:system.2

PBSPro
Method: -l
Example:
> qsub -l advres=system.2
Note: Use of PBSPro resources requires configuring the server_priv/resourcedef file to define the needed extensions, as in the following example:

advres type=string
qos    type=string
sid    type=string
sjid   type=string
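
Once these resources are defined, extension values can be passed directly with -l. A sketch reusing values from the examples above:

> qsub -l advres=system.2,qos=high sleepy.cmd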
Wiki
Method: comment
Example:
comment=qos:high

Resource Manager Extension Values

Using the resource manager-specific specification method described above, the following job extensions are currently available:

ADVRES
Format: [<RSVID>]
Default: ---
Description: Specifies that reserved resources are required to run the job. If <RSVID> is specified, then only resources within the specified reservation may be allocated. (See Job to Reservation Binding.)
Example:
> qsub -l advres=grid.3
   
BANDWIDTH
Format: <DOUBLE> (in MB/s)
Default: ---
Description: Minimum available network bandwidth across allocated resources. (See Network Management.)
Example:
> bsub -ext bandwidth=120 chemjob.txt
   
DDISK
Format: <INTEGER>
Default: 0
Description: Dedicated disk per task in MB.
Example:
> qsub -l ddisk=2000
   
DEADLINE
Format: [[[DD:]HH:]MM:]SS
Default: ---
Description: Relative completion deadline of job (from job submission time).
Example:
> qsub -l deadline=2:00:00,nodes=4 /tmp/bio3.cmd
   
DEPEND
Format: [<DEPENDTYPE>:][{jobname|jobid}.]<ID>[:[{jobname|jobid}.]<ID>]...
Default: ---
Description: Allows specification of job dependencies for compute or system jobs. If no ID prefix (jobname or jobid) is specified, the ID value is interpreted as a job ID.
Example:
# submit a job that will run after jobs 1301 and 1304 complete
> msub -l depend=orion.1301:orion.1304 test.cmd

orion.1322

# submit a jobname-based dependency job
> msub -l depend=jobname.data1005 dataetl.cmd

orion.1428
   
DMEM
Format: <INTEGER>
Default: 0
Description: Dedicated memory per task in MB.
Example:
> msub -l DMEM=512

FEATURE
Format: <FEATURE>[{:|}<FEATURE>]...
Default: ---
Description: Required list of node attributes/node features.
Note: If the pipe (|) character is used as a delimiter, the features are logically OR'd together and the associated job may use resources that match any of the specified features.
Example:
> qsub -l feature='fastos:bigio' testjob.cmd

GATTR
Format: <STRING>
Default: ---
Description: Generic job attribute associated with the job.
Example:
> qsub -l gattr=bigjob

GEOMETRY
Format: { <TASKID>[,<TASKID>]... }[,{ <TASKID>[,<TASKID>]... }]...
Default: ---
Description: Explicitly specified task geometry.
Example:
> qsub -l nodes=2:ppn=4 -W x=geometry:'{0,1,4,5},{2,3,6,7}' quanta2.cmd

GMETRIC
Format: <GMNAME>[:{lt|le|eq|ge|gt|ne}:<VALUE>]
Default: ---
Description: Generic metric requirement for allocated nodes; the constraint must be satisfied by every allocated node. If a <VALUE> is not specified, the node must simply possess the generic metric. (See Generic Metrics for more information.)
Example:
> qsub -l gmetric=bioversion:ge:133244 testj.txt
   
GRES and SOFTWARE
Format: comma-delimited list of generic resources, where each resource is specified using the format <RESTYPE>[{+|:}<COUNT>][@<TIMEFRAME>]
Default: ---
Description: Indicates generic resources required by the job on a per-task basis. If a <COUNT> is not specified, the resource count defaults to 1. If a <TIMEFRAME> is specified, the generic resource is consumed from the start of the job until <TIMEFRAME> expires; otherwise the resource is consumed during the entire life of the job.
Example:
> qsub -W x=GRES:tape+2,matlab+3@2:00 testj.txt

Note: When specifying more than one generic resource with -l, the '%' character must be used to delimit them.

> qsub -l gres=tape+2%matlab+3 testj.txt
> qsub -l software=matlab:2 testj.txt
   
HOSTLIST
Format: '+' delimited list of hostnames
Default: ---
Description: Indicates an exact set, superset, or subset of nodes on which the job must run.
Note: Use the caret (^) or asterisk (*) character to specify the host list as a superset or subset, respectively.
Example:
> msub -l hostlist=nodeA+nodeB+nodeE
   
JGROUP
Format: <JOBGROUPID>
Default: ---
Description: ID of the job group to which this job belongs (different from the GID of the user running the job).
Example:
> msub -l JGROUP=bluegroup
   
JOBFLAGS
Format: one or more of the following colon-delimited job flags: ADVRES[:RSVID], NOQUEUE, NORMSTART, PREEMPTEE, PREEMPTOR, RESTARTABLE, SUSPENDABLE, or COALLOC (see the job flag overview for a complete listing)
Default: ---
Description: Associates various flags with the job.
Example:
> qsub -l nodes=1,walltime=3600,jobflags=advres myjob.py
   
LATENCY
Format: <DOUBLE> (in microseconds)
Default: ---
Description: Maximum average network latency across allocated resources. (See Network Management.)
Example:
> qsub -l latency=2.5 hibw.cmd
   
LOGLEVEL
Format: <INTEGER>
Default: ---
Description: Per-job log verbosity.
Example:
> qsub -W x=loglevel:5 bw.cmd
Job events and analysis will be logged with level 5 verbosity.
   
MAXMEM
Format: <INTEGER> (in megabytes)
Default: ---
Description: Maximum amount of memory the job may consume across all tasks before the JOBMEM action is taken.
Example:
> qsub -W x=MAXMEM:1000mb bw.cmd
If a RESOURCELIMITPOLICY is set for per-job memory utilization, its action will be taken when this value is reached.
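
If memory limits should be enforced this way, the corresponding global policy might look like the following moab.cfg sketch (the CANCEL action is illustrative; see RESOURCELIMITPOLICY for the full set of resources and actions):

RESOURCELIMITPOLICY MEM:ALWAYS:CANCEL

With this in place, a job exceeding its requested memory limit is canceled; the same parameter covers the per-job processor limits used by MAXPROC below.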
   
MAXPROC
Format: <INTEGER>
Default: ---
Description: Maximum CPU load the job may consume across all tasks before the JOBPROC action is taken.
Example:
> qsub -W x=MAXPROC:4 bw.cmd
If a RESOURCELIMITPOLICY is set for per-job processor utilization, its action will be taken when this value is reached.
   
MINPREEMPTTIME
Format: [[[DD:]HH:]MM:]SS
Default: ---
Description: Minimum time the job must run before becoming eligible for preemption.
Note: Can only be specified if the associated QoS allows per-job preemption configuration by setting the preemptconfig flag.
Example:
> qsub -l minpreempttime=900 bw.cmd
Job cannot be preempted until it has run for 15 minutes.
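
The QoS side of this requirement might be configured with a moab.cfg sketch like the following (QoS name illustrative; the flag is the preemptconfig flag named in the note above):

QOSCFG[highprio] QFLAGS=PREEMPTCONFIG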
   
MINPROCSPEED
Format: <INTEGER>
Default: 0
Description: Minimum processor speed (in MHz) for every node on which this job will run.
Example:
> qsub -W x=MINPROCSPEED:2000 bw.cmd
Every node that runs this job must have a processor speed of at least 2000 MHz.
   
MINWCLIMIT
Format: [[[DD:]HH:]MM:]SS
Default: 1:00:00
Description: Minimum wallclock limit the job must run before becoming eligible for extension. (See JOBEXTENDDURATION.)
Example:
> qsub -l minwclimit=300,walltime=16000 bw.cmd
Job will run for at least 300 seconds but up to 16,000 seconds if possible (without interfering with other jobs).

MSTAGEIN
Format: [<SRCURL>[|<SRCURL>...],]<DSTURL>
Default: ---
Description: Indicates that the job has data staging requirements. If more than one source URL is specified, the destination URL must be a directory.

The format of <SRCURL> is:
[PROTO://][HOST][:PORT][/PATH]
where the path is local.

The format of <DSTURL> is:
[PROTO://][HOST][:PORT][/PATH]
where the path is remote.

PROTO can be any of the following protocols: ssh, file, or gsiftp.
HOST is the name of the host where the file resides.
PATH is the path of the source or destination file. The destination path may be a directory when sending a single file and must be a directory when sending multiple files. If a directory is specified, it must end with a forward slash (/).

Valid variables include:
$JOBID
$HOME
$RHOME
$SUBMITHOST
$DEST
$LOCALDATASTAGEHEAD

Note: If no destination is given, the protocol and file name will be set to the same as the source.
Example:
> msub -W x='mstagein=file://$HOME/test1.sh|file:///home/dev/test2.sh,ssh://host/home/dev/' script.sh
Copy test1.sh and test2.sh from the local machine to /home/dev/ on host.
   
MSTAGEOUT
Format: [<SRCURL>[|<SRCURL>...],]<DSTURL>
Default: ---
Description: Indicates that the job has data staging requirements. If more than one source URL is specified, the destination URL must be a directory.

The format of <SRCURL> is:
[PROTO://][HOST][:PORT][/PATH]
where the path is remote.

The format of <DSTURL> is:
[PROTO://][HOST][:PORT][/PATH]
where the path is local.

PROTO can be any of the following protocols: ssh, file, or gsiftp.
HOST is the name of the host where the file resides.
PATH is the path of the source or destination file. The destination path may be a directory when sending a single file and must be a directory when sending multiple files. If a directory is specified, it must end with a forward slash (/).

Valid variables include:
$JOBID
$HOME
$RHOME
$SUBMITHOST
$DEST
$LOCALDATASTAGEHEAD

Note: If no destination is given, the protocol and file name will be set to the same as the source.
Example:
> msub -W x='mstageout=ssh://$DEST/$HOME/test1.sh|ssh://host/home/dev/test2.sh,ssh:///home/dev/' script.sh
Copy test1.sh and test2.sh from the remote machine, host, to /home/dev/ on the local machine.
   
NACCESSPOLICY
Format: one of SHARED, SINGLEJOB, SINGLETASK, SINGLEUSER, or UNIQUEUSER
Default: ---
Description: Specifies how node resources should be accessed. (See Node Access Policies for more information.)
Note: The naccesspolicy option can only be used to make node access more constraining than is specified by the system, partition, or node policies. If the effective node access policy is shared, naccesspolicy can be set to singleuser; if the effective node access policy is singlejob, naccesspolicy can be set to singletask.
Example:
> qsub -l naccesspolicy=singleuser bw.cmd

> bsub -ext naccesspolicy=singleuser lancer.cmd
Job can only allocate free nodes or nodes running jobs by the same user.
   
NALLOCPOLICY
Format: one of the valid settings for the parameter NODEALLOCATIONPOLICY
Default: ---
Description: Specifies how node resources should be selected and allocated to the job. (See Node Allocation Policies for more information.)
Example:
> qsub -l nallocpolicy=minresource bw.cmd
Job should use the minresource node allocation policy.
   
NMATCHPOLICY
Format: one of the valid settings for the parameter JOBNODEMATCHPOLICY
Default: ---
Description: Specifies how node resources should be selected and allocated to the job.
Example:
> qsub -l nodes=2 -W x=nmatchpolicy:exactnode bw.cmd
Job should use the EXACTNODE JOBNODEMATCHPOLICY.
   
NODESCALING
Format: <BOOLEAN>
Default: FALSE
Description: Specifies that the requested node count should be treated as a node equivalency request.
Example:
> qsub -l nodes=2000 -W x=NODESCALING:TRUE
Job will run on the equivalent of 2000 nodes (meaning the job may run on fewer nodes if the nodes are faster, and vice versa).
   
NODESET
Format: <SETTYPE>:<SETATTR>[:<SETLIST>]
Default: ---
Description: Specifies nodeset constraints for job resource allocation. (See the NodeSet Overview for more information.)
Example:
> qsub -l nodeset=ONEOF:PROCSPEED:350:400:450 bw.cmd
   
NODESETDELAY
Format: [[[DD:]HH:]MM:]SS
Default: ---
Description: Specifies the maximum delay Moab will tolerate while trying to satisfy the job's nodeset constraint before it discards the nodeset request and schedules the job normally.
Example:
> qsub -l nodesetdelay=300,walltime=16000 bw.cmd
   
NODESETISOPTIONAL
Format: <BOOLEAN>
Default: ---
Description: Specifies whether the nodeset constraint is optional. (See the NodeSet Overview for more information.)
Note: Requires SCHEDCFG[] FLAGS=allowperjobnodesetisoptional.
Example:
> msub -l nodesetisoptional=true bw.cmd
   
OPSYS
Format: <OperatingSystem>
Default: ---
Description: Specifies the job's required operating system.
Example:
> qsub -l nodes=1,opsys=rh73 chem92.cmd
   
PARTITION
Format: <STRING>[{,|:}<STRING>]...
Default: ---
Description: Specifies the partition (or partitions) in which the job must run.
Note: The job must have access to this partition based on system-wide or credential-based partition access lists.
Example:
> qsub -l nodes=1,partition=math:geology
The job must run in either the math partition or the geology partition.
   
PREF
Format: [{feature|variable}:]<STRING>[:<STRING>]...
Note: If neither feature nor variable is specified, feature is assumed.
Default: ---
Description: Specifies which node features are preferred by the job and should be allocated if available. If preferred node criteria are specified, Moab favors the allocation of matching resources but is not bound to only consider these resources.
Note: Preferences are not honored unless the node allocation policy is set to PRIORITY and the PREF priority component is set within the node's PRIORITYF attribute.
Example:
> qsub -l nodes=1,pref=bigmem
The job may run on any nodes but prefers to allocate nodes with the bigmem feature.
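
A minimal moab.cfg sketch that makes such preferences effective (the weight is illustrative; PREF is the priority component named in the note above):

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT]     PRIORITYF='SPEED + 10 * PREF'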
   
QOS
Format: <STRING>
Default: ---
Description: Requests the specified QoS for the job.
Example:
> qsub -l walltime=1000,qos=highprio biojob.cmd
   

QUEUEJOB
Format: <BOOLEAN>
Default: TRUE
Description: Indicates whether the scheduler should queue the job if resources are not available to run it immediately.
Example:
> msub -l nodes=1,queuejob=false test.cmd
   
REQATTR
Format: required node attributes with version-number support: <ATTRIBUTE>[{>=|>|<=|<|=}<VERSION>]
Default: ---
Description: Indicates required node attributes.
Example:
> qsub -l reqattr=matlab=7.1 testj.txt
   
RESFAILPOLICY
Format: one of CANCEL, HOLD, IGNORE, NOTIFY, or REQUEUE
Default: ---
Description: Specifies the action to take on an executing job if one or more of its allocated nodes fail. This setting overrides the global value specified with the NODEALLOCRESFAILUREPOLICY parameter.
Example:
resfailpolicy=ignore
For this particular job, ignore node failures.
   
RMTYPE
Format: <STRING>
Default: ---
Description: One of the resource manager types currently available within the cluster or grid. Typically, this is one of PBS, LSF, LL, SGE, SLURM, BProc, and so forth.
Example:
rmtype=ll
Only run the job on a Loadleveler destination resource manager.
   
SIGNAL
Format: <INTEGER>[@<OFFSET>]
Default: ---
Description: Specifies the pre-termination signal to be sent to a job prior to it reaching its walltime limit or being terminated by Moab. The optional offset value specifies how long before job termination the signal should be sent. By default, the pre-termination signal is sent one minute before a job is terminated.
Example:
> msub -l signal=32@120 bio45.cmd
   
SPRIORITY
Format: <INTEGER>
Default: 0
Description: Allows Moab administrators to set a system priority on a job (similar to setspri).
Example:
> qsub -l nodes=16,spriority=100 job.cmd
   
TASKDISTPOLICY
Format: RR or PACK
Default: ---
Description: Allows users to specify task distribution policies on a per-job basis. (See Task Distribution Overview.)
Example:
> qsub -l nodes=16,taskdistpolicy=rr job.cmd
   
TEMPLATE
Format: <STRING>
Default: ---
Description: Specifies a job template to be used as a set template. The requested template must have SELECT=TRUE. (See Job Templates.)
Example:
> msub -l walltime=1000,nodes=16,template=biojob job.cmd
   
TERMTIME
Format: <TIMESPEC>
Default: 0
Description: Specifies the time at which Moab should cancel a queued or active job. (See Job Deadline Support.)
Example:
> msub -l nodes=10,walltime=600,termtime=12:00_Jun/14 job.cmd
   
TPN
Format: <INTEGER>[+]
Default: 0
Description: Tasks per node allowed on allocated hosts. If the plus (+) character is specified, the tasks-per-node value is interpreted as a minimum constraint; otherwise it is interpreted as an exact constraint.

Note on the differences between TPN and PPN:

There are two key differences between (A) qsub -l nodes=12:ppn=3 and (B) qsub -l nodes=12,tpn=3.

The first difference is that ppn is interpreted as the minimum required tasks per node, while tpn defaults to an exact tasks-per-node constraint; case (B) executes the job with exactly 3 tasks on each allocated node, while case (A) executes the job with at least 3 tasks on each allocated node (for example, nodeA:4, nodeB:3, nodeC:5).

The second major difference is that nodes=X:ppn=Y actually requests X*Y tasks, whereas nodes=X,tpn=Y requests only X tasks. (See the worked comparison after this entry.)

Example:
> msub -l nodes=10,walltime=600,tpn=4 job.cmd
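
A quick worked comparison of the two forms; the task counts follow directly from the rules above (script name illustrative):

> qsub -l nodes=12:ppn=3 job.cmd   # requests 12*3 = 36 tasks; at least 3 tasks per node
> qsub -l nodes=12,tpn=3 job.cmd   # requests 12 tasks; exactly 3 tasks per node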
   
TRIG
Format: <TRIGSPEC>
Default: ---
Description: Adds trigger(s) to the job. (See the Trigger Specification Page for specific syntax.)
Note: Job triggers can only be specified if allowed by the QoS flag trigger.
Example:
> qsub -l trig=start:exec@/tmp/email.sh job.cmd
   
TRL
Format: <INTEGER>[@<INTEGER>][:<INTEGER>[@<INTEGER>]]...
Default: 0
Description: Specifies alternate task requests with their optional walltimes. (See Malleable Jobs.)
Example:
> msub -l trl=2@500:4@250:8@125:16@62 job.cmd

or

> qsub -l trl=2:3:4
   
TRL
Format: <INTEGER>-<INTEGER>
Default: 0
Description: Specifies a range of task requests that require the same walltime. (See Malleable Jobs.)
Example:
> msub -l trl=32-64 job.cmd
Note: For optimization purposes, Moab does not perform an exhaustive search of all possible values but will at least evaluate the beginning, the end, and four equally distributed choices in between.
   
TTC
Format: <INTEGER>
Default: 0
Description: Total tasks allowed across the requested number of hosts. TTC is supported in the Wiki resource manager for SLURM. Compressed output must be enabled in the moab.cfg file (see SLURMFLAGS for more information). NODEACCESSPOLICY should be set to SINGLEJOB and JOBNODEMATCHPOLICY should be set to EXACTNODE in the moab.cfg file.
Example:
> msub -l nodes=10,walltime=600,ttc=20 job.cmd
Note: In this example, assuming all the nodes are 8-processor nodes, the first allocated node will have 10 tasks, the next node will have 2 tasks, and the remaining 8 nodes will have 1 task each, for a total task count of 20 tasks.
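
A moab.cfg sketch of the prerequisites named above; the resource manager name and the exact SLURMFLAGS value are assumptions to be checked against the SLURMFLAGS documentation:

RMCFG[slurm]       SLURMFLAGS=compressoutput
NODEACCESSPOLICY   SINGLEJOB
JOBNODEMATCHPOLICY EXACTNODE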
   
VAR
Format: <ATTR>:<VALUE>
Default: ---
Description: Adds a generic variable to the job.
Example:
VAR=applicationtype:blast

Resource Manager Extension Examples

If more than one extension is required in a given job, extensions can be concatenated with a semicolon separator using the format <ATTR>:<VALUE>[;<ATTR>:<VALUE>]...

Example 1

#@comment="HOSTLIST:node1,node2;QOS:special;SID:silverA"

Job must run on nodes node1 and node2 using the QoS special. The job is also associated with the system ID silverA, allowing the silver daemon to monitor and control the job.

Example 2

#PBS -W x="NODESET:ONEOF:NETWORK;DMEM:64"

Job will have resources allocated subject to network-based nodeset constraints. Further, each task will dedicate 64 MB of memory.

Example 3

> qsub -l nodes=4,walltime=1:00:00 -W x="FLAGS:ADVRES:john.1"

Job will be forced to run within the john.1 reservation.
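
Several -l extensions can also be combined in a single comma-delimited request, as in the following sketch (attribute values taken from the entries above; script name illustrative):

> msub -l nodes=2,walltime=600,qos=highprio,partition=math,resfailpolicy=ignore job.cmd

The job requests the highprio QoS, is restricted to the math partition, and ignores allocated-node failures.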

See Also