Nodes can be assigned three types of location information based on partitions, racks, and queues.
The first form of location assignment, the partition, allows nodes to be grouped according to physical resource constraints or policy needs. By default, jobs are not allowed to span more than one partition, so partition boundaries are often valuable when the underlying network topology makes certain resource allocations undesirable. Additionally, per-partition policies can be specified to grant control over how scheduling is handled on a partition-by-partition basis. See the Partition Overview for more information.
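For example, partition membership can be assigned per node in the Moab configuration file. The following lines are a minimal sketch using hypothetical node and partition names:

NODECFG[node001] PARTITION=partA
NODECFG[node002] PARTITION=partA
NODECFG[node003] PARTITION=partB
...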
Rack-based location information is orthogonal to the partition-based configuration and is mainly an organizational construct. In general rack-based usage, a node is assigned both a rack and a slot number. This approach descends from the IBM SP2 organizational model, in which a rack can contain any number of slots but typically contains between 1 and 64. Using the rack and slot number combination, individual compute nodes can be grouped and displayed in a more ordered manner in certain Moab commands (for example, showstate). Currently, rack information can only be specified directly by the system via the SDR interface on SP2/Loadleveler systems. On all other systems, this information must be specified using an information service or specified manually using the RACK, SLOT, and SIZE attributes of the NODECFG parameter.
Sites may arbitrarily assign nodes to racks and rack slots without impacting scheduling behavior. Neither rack numbers nor rack slot numbers need to be contiguous; their use is simply for convenience in displaying and analyzing compute resources.
Example:
NODECFG[node024] RACK=1 SLOT=1
NODECFG[node025] RACK=1 SLOT=2
NODECFG[node026] RACK=2 SLOT=1 PARTITION=special
...
When specifying node and rack information, slot values must be in the range of 1 to 64, and racks must be in the range of 1 to 400.
Some resource managers allow queues (or classes) to be defined and then associated with a subset of available compute resources. With systems such as Loadleveler or PBSPro, these queue-to-node mappings are automatically detected. On resource managers that do not provide this service, Moab provides alternative mechanisms for enabling this feature.
Under TORQUE, queue-to-node mapping can be accomplished by using the qmgr command to set the queue's acl_hosts parameter to the desired host list. Further, the acl_host_enable parameter should be set to False.
Setting acl_hosts and then setting acl_host_enable to True constrains the list of hosts from which jobs may be submitted to the queue.
The following example highlights this process and maps the queue debug to the nodes host14 through host17.
> qmgr
Max open servers: 4
Qmgr: set queue debug acl_hosts = "host14,host15,host16,host17"
Qmgr: set queue debug acl_host_enable = false
Qmgr: quit
All queues that do not have acl_hosts specified are global; that is, they show up on every node. To constrain these queues to a subset of nodes, each queue requires its own acl_hosts parameter setting.
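For example, a second queue could be mapped to its own hosts using the same process; the following sketch assumes a hypothetical queue named batch and hosts host18 and host19:

> qmgr
Qmgr: set queue batch acl_hosts = "host18,host19"
Qmgr: set queue batch acl_host_enable = false
Qmgr: quit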
When selecting or specifying nodes, either via command line tools or via configuration file based lists, Moab offers four types of node expressions: node lists, exact lists, node ranges, and regular expressions.
Node Lists
Node lists can be specified as one or more comma or whitespace delimited node IDs. Specified node IDs can be based on either short or fully qualified hostnames. Each element will be interpreted as a regular expression.
SRCFG[basic] HOSTLIST=cl37.icluster,ax45,ax46 ...
Exact Lists
When Moab receives a list of nodes it will, by default, interpret each element as a regular expression. To disable this and have each element interpreted as a literal node name, the l: prefix can be used as in the following example:
> setres l:n00,n01,n02
Node Range
Node lists can be specified as one or more comma or whitespace delimited node ranges. Each node range can be specified using either the <STARTINDEX>-<ENDINDEX> or the <HEADER>[<STARTINDEX>-<ENDINDEX>] format. To explicitly request a range, the node expression must be preceded with the string r: as in the following examples:
> setres r:37-472,513,516-855
CLASSCFG[long] HOSTLIST=r:anc-b[37-472]
Only one expression is allowed with node ranges.
By default, Moab attempts to extract a node's node index assuming this information is built into the node's naming convention. If needed, this information can be explicitly specified in the Moab configuration file using NODECFG's NODEINDEX attribute, or it can be extracted from alternately formatted node IDs by specifying the NODEIDFORMAT parameter.
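For example, if a node's name does not encode a usable index, the index could be set explicitly in the configuration file; the following line is a sketch using a hypothetical node name:

NODECFG[nodeA] NODEINDEX=1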
Node Regular Expression
Node lists may also be specified as one or more comma or whitespace delimited regular expressions. Each node regular expression must be specified in a format acceptable to the standard C regular expression libraries, which support wildcards and other special characters.
Node lists are by default interpreted as regular expressions, but regular expression handling can also be explicitly requested with the string x: as in the following examples:
# select nodes cl30 thru cl55
SRCFG[basic] HOSTLIST=x:cl[34],cl5[0-5]
...
# select nodes cl30 thru cl55
SRCFG[basic] HOSTLIST=cl[34],cl5[0-5]
...
To control node selection search ordering, set the OBJECTELIST parameter to one of the following options: exact, range, regex, rangere, or rerange.
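As a sketch, this parameter would be set in the Moab configuration file like other global parameters; the value shown here is illustrative only:

OBJECTELIST exact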