Moab Workload Manager

5.3 Node Access Policies

Moab allocates resources to jobs on the basis of a job task—an atomic collection of resources that must be co-located on a single compute node. A given job may request 20 tasks where each task is defined as one processor and 128 MB of RAM. Compute nodes with multiple processors often possess enough resources to support more than one task simultaneously. When it is possible for more than one task to run on a node, node access policies determine which tasks may share the compute node's resources.
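For instance, with a TORQUE/PBS-style resource manager, such a request might be submitted as follows (the resource keywords are illustrative; the exact syntax depends on the resource manager in use):

# hypothetical submission: 20 tasks, each consisting of 1 processor and 128 MB of RAM
qsub -l nodes=20:ppn=1,pmem=128mb job.cmd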

Moab supports a number of distinct node access policies, which are listed in the following table:

Policy      Description
SHARED      Tasks from any combination of jobs may use available resources.
SHAREDONLY  Only jobs requesting shared node access may use available resources.
SINGLEJOB   Tasks from a single job may use available resources.
SINGLETASK  A single task from a single job may run on the node.
SINGLEUSER  Tasks from any jobs owned by the same user may use available resources.
UNIQUEUSER  Any number of tasks from a single job may allocate resources from a node, but only if the user has no other jobs running on that node.

Note: The UNIQUEUSER policy is useful in environments where per-job prolog/epilog scripts are used to clean up processes based on userid.
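As a rough sketch, assuming a TORQUE-style epilogue script in which the second argument is the user name of the job's owner, such a cleanup might look like the following:

#!/bin/sh
# hypothetical epilogue: $2 is the user name of the completed job's owner
# with UNIQUEUSER the user has no other jobs on this node, so it is safe
# to remove any processes left behind under that userid
pkill -9 -u "$2"
exit 0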

5.3.1 Configuring Node Access Policies

The global node access policy may be specified via the NODEACCESSPOLICY parameter. This global default may be overridden on a per node basis with the ACCESS attribute of the NODECFG parameter, or on a per job basis using the resource manager extension NACCESSPOLICY. Finally, a per queue node access policy may be specified by setting either the NODEACCESSPOLICY or FORCENODEACCESSPOLICY attribute of the CLASSCFG parameter. FORCENODEACCESSPOLICY overrides any per job specification in all cases, whereas NODEACCESSPOLICY is overridden by a per job specification.
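The following moab.cfg fragment sketches how these levels might be combined (the node and class names are illustrative, not defaults):

# global default: allow tasks from any combination of jobs to share a node
NODEACCESSPOLICY      SHARED

# per node override via the ACCESS attribute
NODECFG[node01]       ACCESS=SINGLEJOB

# per queue policies; FORCENODEACCESSPOLICY also overrides any per job request
CLASSCFG[batch]       NODEACCESSPOLICY=SINGLEUSER
CLASSCFG[dedicated]   FORCENODEACCESSPOLICY=SINGLEJOB

On a per job basis, the NACCESSPOLICY extension is typically passed through the resource manager's resource list, for example qsub -l naccesspolicy=singlejob; the exact syntax depends on the resource manager.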

By default, nodes are accessible according to the setting of the system-wide NODEACCESSPOLICY parameter unless a specific ACCESS policy is specified on a per node basis using the NODECFG parameter. Jobs may override this policy, and subsequent jobs must then conform to the access policies of all jobs currently running on a given node. For example, if the NODEACCESSPOLICY parameter is set to SHARED, a new job with a job-specific access policy of SINGLEUSER may be launched on an idle node. While this job runs, the effective node access policy of that node changes to SINGLEUSER, and subsequent job tasks may only be launched on the node if they are submitted by the same user. When all of the single-user jobs on that node have completed, the effective node access policy reverts to SHARED and the node can again be used in SHARED mode.
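For example, with the global policy set to SHARED, the scenario above might be triggered by a submission such as the following (TORQUE-style syntax, shown for illustration only):

# request SINGLEUSER access for this job; while it runs, only tasks from
# jobs owned by the same user may be placed on the allocated node
qsub -l nodes=1,naccesspolicy=singleuser job.cmd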

Example

To set a global policy of SINGLETASK on all nodes except nodes 13 and 14, use the following:

# by default, enforce dedicated node access on all nodes
NODEACCESSPOLICY  SINGLETASK

# allow nodes 13 and 14 to be shared
NODECFG[node13]   ACCESS=SHARED
NODECFG[node14]   ACCESS=SHARED

See Also