Moab supports a number of distinct node access policies, listed in the following table:
|Policy|Description|
|------|-----------|
|SHARED|Tasks from any combination of jobs may use available resources.|
|SHAREDONLY|Only jobs requesting shared node access may use available resources.|
|SINGLEACCOUNT|Tasks from any jobs owned by the same account may use available resources.|
|SINGLECLASS|Tasks from any jobs owned by the same class may use available resources.|
|SINGLEGROUP|Tasks from any jobs owned by the same group may use available resources.|
|SINGLEJOB|Only tasks from a single job may use the node's resources. When enforcing limits using CLASSCFG attributes, use MAX.NODE instead of MAX.PROC; MAX.PROC enforces the requested processors, not the actual processors dedicated to the job.|
|SINGLETASK|Only a single task from a single job may run on the node.|
|SINGLEUSER|Tasks from any jobs owned by the same user may use available resources.|
|UNIQUEUSER|Any number of tasks from a single job may allocate resources from a node, but only if the user has no other jobs running on that node. UNIQUEUSER limits the number of jobs a single user can run on a node, allowing other users to run jobs with the remaining resources. This policy is useful in environments where job prolog/epilog scripts clean up processes based on user ID.|
4.9.1 Configuring Node Access Policies
The global node access policy may be specified via the NODEACCESSPOLICY parameter. This global default may be overridden on a per node basis with the ACCESS attribute of the NODECFG parameter, or on a per job basis using the NACCESSPOLICY resource manager extension. Finally, a per queue node access policy may be specified by setting either the NODEACCESSPOLICY or FORCENODEACCESSPOLICY attribute of the CLASSCFG parameter. FORCENODEACCESSPOLICY overrides any per job specification in all cases, whereas NODEACCESSPOLICY is overridden by a per job specification.
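For illustration, a minimal moab.cfg fragment might combine these levels as follows (the class name batch and the node name node42 are assumed here):

```
# global default: nodes may be shared by any jobs
NODEACCESSPOLICY SHARED

# jobs in the (assumed) class "batch" always get exclusive nodes;
# FORCENODEACCESSPOLICY ignores any per-job naccesspolicy request
CLASSCFG[batch] FORCENODEACCESSPOLICY=SINGLEJOB

# the (assumed) node "node42" is always dedicated, overriding the global default
NODECFG[node42] ACCESS=SINGLEJOB
```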
When multiple node access policies apply to a given job or node (for example, SINGLEJOB is configured globally but the class is configured as SHARED), the more restrictive policy applies. The most restrictive policy is SINGLETASK, followed by SINGLEJOB, then the single-credential policies (SINGLEUSER, SINGLEGROUP, SINGLEACCOUNT, SINGLECLASS), with SHARED being the least restrictive.
By default, nodes are accessible according to the system-wide NODEACCESSPOLICY parameter unless a specific ACCESS policy is specified on a per node basis using the NODECFG parameter. Jobs may override this policy, and subsequent jobs must conform to the access policies of all jobs currently running on a given node. For example, if the NODEACCESSPOLICY parameter is set to SHARED, a new job may be launched on an idle node with a job-specific access policy of SINGLEUSER. While this job runs, the effective node access policy changes to SINGLEUSER, and subsequent job tasks may only be launched on this node if they are submitted by the same user. When all single-user jobs have completed on that node, the effective node access policy reverts to SHARED and the node can again be used in SHARED mode.
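Assuming the TORQUE qsub interface and a submit script named jobscript.sh, the scenario above corresponds to a submission such as:

```
# moab.cfg: the global default allows sharing
NODEACCESSPOLICY SHARED

# the user requests a stricter per-job policy; while this job runs on a node,
# that node's effective access policy becomes SINGLEUSER
qsub -l naccesspolicy=singleuser jobscript.sh
```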
For example, to set a global policy of SINGLETASK on all nodes except nodes 13 and 14, use the following:
# by default, enforce dedicated node access on all nodes
NODEACCESSPOLICY SINGLETASK

# allow nodes 13 and 14 to be shared
NODECFG[node13] ACCESS=SHARED
NODECFG[node14] ACCESS=SHARED
You can also set SINGLEJOB using the qsub node-exclusive option (-n). For example:
qsub -n jobscript.sh
This sets node_exclusive = True in the output of qstat -f <job ID>.
Alternately, you could also use either of the following:
qsub -l naccesspolicy=singlejob jobscript.sh
qsub -W x=naccesspolicy:singlejob jobscript.sh