Various resources can be requested at the time of job submission. A job can request a particular node, a particular node attribute, or even a number of nodes with particular attributes. Either native TORQUE resources, or external scheduler resource extensions may be specified. The native TORQUE resources are listed in the following table:
|arch||string||Specifies the administrator defined system architecture required. This defaults to whatever the PBS_MACH string is set to in "local.mk".|
|cput||seconds, or [[HH:]MM:]SS||Maximum amount of CPU time used by all processes in the job.|
|epilogue||string||Specifies a user owned epilogue script which will be run before the system epilogue and epilogue.user scripts at the completion of a job. The syntax is epilogue=<file>. The file can be designated with an absolute or relative path.|
For more information, see Prologue and epilogue scripts.
|file||size||The amount of total disk requested for the job. (Ignored on Unicos.)|
|host||string||Name of the host on which the job should be run. This resource is provided for use by the site's scheduling policy. The allowable values and effect on job placement are site dependent.|
|mem||size||Maximum amount of physical memory used by the job. (Ignored on Darwin, Digital Unix, Free BSD, HPUX 11, IRIX, NetBSD, and SunOS. Also ignored on Linux if number of nodes is not 1. Not implemented on AIX and HPUX 10.)|
|nice||integer||Number between -20 (highest priority) and 19 (lowest priority). Adjusts the process execution priority.|
|nodes||{<node_count> | <hostname>} [:ppn=<ppn>] [:gpus=<gpu>] [:<property>[:<property>]...] [+ ...]|
Number and/or type of nodes to be reserved for exclusive use by the job. The value is one or more node_specs joined with the + (plus) character: node_spec[+node_spec...]. Each node_spec is a number of nodes required of the type declared in the node_spec and a name of one or more properties desired for the nodes. The number, the name, and each property in the node_spec are separated by a : (colon). If no number is specified, one (1) is assumed. The name of a node is its hostname. The properties of nodes are:
The number of virtual processors available on a node by default is 1, but it can be configured in the $TORQUE_HOME/server_priv/nodes file using the np attribute (see Server node file configuration). The virtual processor can relate to a physical core on the node or it can be interpreted as an "execution slot" such as on sites that set the node np value greater than the number of physical cores (or hyper-thread contexts). The ppn value is a characteristic of the hardware, system, and site, and its value is to be determined by the administrator.
The number of GPUs available on a node can be configured in the $TORQUE_HOME/server_priv/nodes file using the gpus attribute (see Server node file configuration). The GPU value is a characteristic of the hardware, system, and site, and its value is to be determined by the administrator.
See qsub -l nodes for examples.
By default, the node resource is mapped to a virtual node (that is, directly to a processor, not a full physical compute node). This behavior can be changed within Maui or Moab by setting the JOBNODEMATCHPOLICY parameter. See "Appendix F: Parameters" of the Moab Workload Manager Administrator Guide for more information.
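As a sketch of how the node_spec grammar above decomposes, the following Python snippet (illustrative only, not part of TORQUE; the function name parse_nodes is hypothetical) splits a nodes= value into its node_specs:

```python
# Illustrative sketch: decompose a nodes= value such as "2:server+14"
# into (count, [properties]) pairs, following the grammar described
# above. Not a TORQUE API; parse_nodes is a hypothetical helper.

def parse_nodes(spec):
    """Split the value on '+' into node_specs, then each node_spec on ':'.

    The first field is either a node count or a hostname; remaining
    fields are properties (including ppn=N style modifiers). A missing
    count defaults to one, per the rules above.
    """
    result = []
    for node_spec in spec.split("+"):
        fields = node_spec.split(":")
        head, props = fields[0], fields[1:]
        if head.isdigit():
            result.append((int(head), props))
        else:
            # A hostname or bare property list implies a count of 1;
            # the name itself is kept alongside any properties.
            result.append((1, fields))
    return result
```

For example, parse_nodes("2:server+14") yields the two node_specs from the examples below: two "server" nodes and fourteen unqualified nodes.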
|opsys||string||Specifies the administrator defined operating system as defined in the MOM configuration file.|
|other||string||Allows a user to specify site specific information. This resource is provided for use by the site's scheduling policy. The allowable values and effect on job placement are site dependent.|
This does not work for msub using Moab and Maui.
|pcput||seconds, or [[HH:]MM:]SS||Maximum amount of CPU time used by any single process in the job.|
|pmem||size||Maximum amount of physical memory used by any single process of the job. (Ignored on Fujitsu. Not implemented on Digital Unix and HPUX.)|
|procs||integer||(Applicable in version 2.5.0 and later.) The number of processors to be allocated to a job. The processors can come from one or more qualified node(s). Only one procs declaration may be used per submitted qsub command.|
> qsub -l nodes=3 -l procs=2
|procs_bitmap||string||A string made up of 1's and 0's in reverse order of the processor cores requested. A procs_bitmap=1110 means the job requests a node that has four available cores, but the job runs exclusively on cores two, three, and four. With this bitmap, core one is not used.|
For more information, see Scheduling cores.
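The reverse-order reading of the bitmap can be sketched in Python (illustrative only; cores_used is a hypothetical helper, not a TORQUE API):

```python
# Illustrative sketch: determine which cores a procs_bitmap string
# selects. The bitmap is read in reverse order of the cores, so the
# last character corresponds to core one.

def cores_used(bitmap):
    # Reverse the string so index 0 corresponds to core one, then
    # collect the 1-based core numbers whose bit is '1'.
    return [i + 1 for i, bit in enumerate(reversed(bitmap)) if bit == "1"]
```

With this reading, cores_used("1110") yields cores two, three, and four, matching the description above.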
|prologue||string||Specifies a user owned prologue script which will be run after the system prologue and prologue.user scripts at the beginning of a job. The syntax is prologue=<file>. The file can be designated with an absolute or relative path.|
For more information, see Prologue and epilogue scripts.
|pvmem||size||Maximum amount of virtual memory used by any single process in the job. (Ignored on Unicos.)|
|size||integer||For TORQUE, this resource has no meaning. It is passed on to the scheduler for interpretation. In the Moab scheduler, the size resource is intended for use in Cray installations only.|
|software||string||Allows a user to specify software required by the job. This is useful if certain software packages are only available on certain systems in the site. This resource is provided for use by the site's scheduling policy. The allowable values and effect on job placement are site dependent. (See "Scheduler License Manager" in the Moab Workload Manager Administrator Guide for more information.)|
|vmem||size||Maximum amount of virtual memory used by all concurrent processes in the job. (Ignored on Unicos.)|
|walltime||seconds, or [[HH:]MM:]SS||Maximum amount of real time during which the job can be in the running state.|
The size format specifies the maximum amount in terms of bytes or words. It is expressed in the form integer[suffix]. The suffix is a multiplier: "b" means bytes (the default) and "w" means words, optionally preceded by "k", "m", "g", or "t" for binary kilo (1,024), mega (1,048,576), giga (1,073,741,824), or tera (1,099,511,627,776) multiples. The size of a word is calculated on the execution server as its word size.
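A minimal sketch of this size format in Python (illustrative only; size_to_bytes is a hypothetical helper, and the word size is passed in as an assumption rather than queried from the execution server):

```python
# Illustrative sketch: convert a size string such as "200mb" into bytes,
# assuming the standard binary multipliers described above. Word ("w")
# suffixes are scaled by a caller-supplied word size.

MULTIPLIERS = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3, "t": 1024 ** 4}

def size_to_bytes(size, word_size=8):
    size = size.lower()
    unit = "b"                      # bytes are the default unit
    if size.endswith(("b", "w")):
        unit, size = size[-1], size[:-1]
    scale = 1
    if size and size[-1] in MULTIPLIERS:
        scale, size = MULTIPLIERS[size[-1]], size[:-1]
    count = int(size) * scale
    return count * word_size if unit == "w" else count
```

Under these assumptions, "200mb" is 200 × 1,048,576 bytes, and "2gw" on a server with 8-byte words is 2 × 1,073,741,824 × 8 bytes.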
Example 2-1: qsub -l nodes
|> qsub -l nodes=12||Request 12 nodes of any type|
|> qsub -l nodes=2:server+14||Request 2 "server" nodes and 14 other nodes (a total of 16) - this specifies two node_specs, "2:server" and "14"|
|> qsub -l nodes=server:hippi+10:noserver+3:bigmem:hippi||Request (a) 1 node that is a "server" and has a "hippi" interface, (b) 10 nodes that are not servers, and (c) 3 nodes that have a large amount of memory and have hippi|
|> qsub -l nodes=b2005+b1803+b1813||Request 3 specific nodes by hostname|
|> qsub -l nodes=4:ppn=2||Request 2 processors on each of four nodes|
|> qsub -l nodes=1:ppn=4||Request 4 processors on one node|
|> qsub -l nodes=2:blue:ppn=2+red:ppn=3+b1014||Request 2 processors on each of two blue nodes, three processors on one red node, and the compute node "b1014"|
This job requests a node with 200MB of available memory:
> qsub -l mem=200mb /home/user/script.sh
This job will wait until node01 is free with 200MB of available memory:
> qsub -l nodes=node01,mem=200mb /home/user/script.sh
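When several resources are combined on one -l argument, as in the example above, they are joined with commas. A small Python sketch of that composition (illustrative only; resource_list is a hypothetical helper, not part of TORQUE):

```python
# Illustrative sketch: join resource requests into a single comma-
# separated -l argument string, as in "nodes=node01,mem=200mb".

def resource_list(resources):
    """Render a mapping of resource names to values as a -l string."""
    return ",".join(f"{name}={value}" for name, value in resources.items())
```

For example, resource_list({"nodes": "node01", "mem": "200mb"}) produces the argument string used in the last qsub example above.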