
4.8 Node Allocation Policies

While job prioritization allows a site to determine which job to run, node allocation policies allow a site to specify how available resources should be allocated to each job. The algorithm used is specified by the parameter NODEALLOCATIONPOLICY. There are multiple node allocation policies to choose from, allowing selection based on reservation constraints, node configuration, resource usage, and other factors. You can specify these policies with a system-wide default value, on a per-partition basis, or on a per-job basis. Please note that LASTAVAILABLE is the default policy.

Available algorithms are described in detail in the following sections and include FIRSTAVAILABLE, LASTAVAILABLE, PRIORITY, CPULOAD, MINRESOURCE, CONTIGUOUS, MAXBALANCE, and PLUGIN.
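
The policy is set in the Moab configuration file (moab.cfg). The following is a minimal sketch, assuming a hypothetical partition named "interactive" and assuming the per-partition override is expressed through PARCFG; it sets a system-wide default and overrides it for that one partition:

NODEALLOCATIONPOLICY  LASTAVAILABLE
PARCFG[interactive]   NODEALLOCATIONPOLICY=CPULOAD
...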

4.8.1 Node Allocation Overview

Node allocation is the process of selecting the best resources to allocate to a job from a list of available resources. Making this decision intelligently is important in an environment that possesses one or more of the following attributes:

4.8.1.A Heterogeneous Resources

Moab analyzes job processing requirements and assigns resources to maximize hardware utility.

For example, suppose two nodes are available in a system, A and B. Node A has 768 MB of RAM and node B has 512 MB. The next two jobs in the queue are X and Y. Job X requests 256 MB and job Y requests 640 MB. Job X is next in the queue and can fit on either node, but Moab recognizes that job Y (640 MB) can only fit on node A (768 MB). Instead of putting job X on node A and blocking job Y, Moab can put job X on node B and job Y on node A.
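
One policy that captures this behavior for configured memory is MINRESOURCE (described in 4.8.3), which favors the smallest node that still satisfies a job's constraints. A minimal configuration sketch:

NODEALLOCATIONPOLICY MINRESOURCE
...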

4.8.1.B Shared Nodes

Symmetric Multiprocessing (SMP)

When sharing SMP-based compute resources amongst tasks from more than one job, resource contention and fragmentation issues arise. In SMP environments, the general goal is to deliver maximum system utilization for a combination of compute-intensive and memory-intensive jobs while preventing overcommitment of resources.

By default, most current systems do not do a good job of logically partitioning the resources (such as CPU, memory, and network bandwidth) available on a given node. Consequently contention often arises between tasks of independent jobs on the node. This can result in a slowdown for all jobs involved, which can have significant ramifications if large-way parallel jobs are involved. Virtualization, CPU sets, and other techniques are maturing quickly as methods to provide logical partitioning within shared resources.

On large-way SMP systems (> 32 processors/node), job packing can result in intra-node fragmentation. For example, take two nodes, A and B, each with 64 processors. Assume they are currently loaded with various jobs and A has 24 and B has 12 processors free. Two jobs are submitted; job X requests 10 processors and job Y requests 20 processors. Job X can start on either node but starting it on node A prevents job Y from running. An algorithm to handle intra-node fragmentation is straightforward for a single resource case, but the algorithm becomes more involved when jobs request a combination of processors, memory, and local disk. These workload factors should be considered when selecting a site's node allocation policy as well as identifying appropriate policies for handling resource utilization limit violations.

Interactive Nodes

In many cases, sites are interested in allowing multiple users to simultaneously use one or more nodes for interactive purposes. The workload is commonly not compute intensive, consisting of intermittent tasks such as coding, compiling, and testing. Because these jobs vary widely in resource usage over time, sites are able to pack a larger number of them onto the same node. Consequently, a common practice is to restrict job scheduling based on utilized, rather than dedicated, resources.
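
One way to express this in Moab is the NODEAVAILABILITYPOLICY parameter; the line below is a sketch, assuming the site wants node availability tracked by utilized rather than dedicated resources:

NODEAVAILABILITYPOLICY UTILIZED
...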

Interactive Node Example

The example configuration files that follow show one method by which node sharing can be accomplished within a Torque + Moab environment. This example is based on a hypothetical cluster composed of 4 nodes, each with 4 cores. For the compute nodes, job tasks are limited to the actual cores, preventing overcommitment of resources. For the interactive nodes, up to 32 job tasks are allowed, but a node stops accepting additional tasks if either its memory is fully utilized or its CPU load exceeds 4.0. Thus, Moab continues packing the interactive nodes with jobs until their carrying capacity is reached.

Example 4-3: /opt/moab/etc/moab.cfg

# constrain interactive jobs to interactive nodes
# constrain interactive jobs to 900 proc-seconds
CLASSCFG[interactive]  HOSTLIST=interactive01,interactive02
CLASSCFG[interactive]  MAX.CPUTIME=900
RESOURCELIMITPOLICY    CPUTIME:ALWAYS:CANCEL
# base interactive node allocation on load and jobs
NODEALLOCATIONPOLICY  PRIORITY
NODECFG[interactive01] PRIORITYF='-20*LOAD - JOBCOUNT'
NODECFG[interactive02] PRIORITYF='-20*LOAD - JOBCOUNT'

Example 4-4: /var/spool/torque/server_priv/nodes

interactive01 np=32
interactive02 np=32
compute01     np=4
compute02     np=4

Example 4-5: /var/spool/torque/mom_priv/config on "interactive01"

# interactive01
$max_load 4.0

Example 4-6: /var/spool/torque/mom_priv/config on "interactive02"

# interactive02
$max_load 4.0

4.8.1.C Reservations or Service Guarantees

A reservation-based system adds the time dimension to the node allocation decision. With reservations, node resources must be viewed in a two-dimensional node-time space. Allocating nodes to jobs fragments this node-time space and makes it more difficult to schedule jobs in the remaining, more constrained node-time slots. Allocation decisions should be made in such a way as to minimize this fragmentation and maximize the scheduler's ability to continue starting jobs in existing slots.

The following figure shows job A and job B running, with a reservation, X, created some time in the future. Assume that job A is 2 hours long and job B is 3 hours long. Two new single-processor jobs are submitted, C and D; job C requires 3 hours of compute time while job D requires 5 hours. Either job will just fit in the free space located above job A or in the free space located below job B. If job C is placed above job A, job D, which requires 5 hours, will be prevented from running by the presence of reservation X. However, if job C is placed below job B, job D can still start immediately above job A.

Image 4-1: Job A, Job B, and Reservation X scheduled on nodes


The preceding example demonstrates the importance of time-based reservation information in making node allocation decisions, both when starting jobs and when creating reservations. The impact of these time-based issues grows significantly with the number of reservations in place on a given system. The LASTAVAILABLE algorithm works on this premise, locating resources that have the smallest gap between the end of the job under consideration and the start of a future reservation.
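
Because LASTAVAILABLE is the default policy, no configuration is required to obtain this behavior; to set it explicitly (for example, after experimenting with other policies), a single line in moab.cfg is sufficient:

NODEALLOCATIONPOLICY LASTAVAILABLE
...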

4.8.1.D Non-flat Network

On systems where network connections do not resemble a flat all-to-all topology, task placement may impact the performance of communication-intensive parallel jobs. If latencies and network bandwidth between any two nodes vary significantly, the node allocation algorithm should attempt to pack the tasks of a given job as close to each other as possible to minimize the impact of bandwidth and latency differences.

4.8.2 Node Selection Factors

While the node allocation policy determines which nodes a job will use, other factors narrow the options before the policy makes the final decision. The following steps show how Moab allocates nodes and how other policies affect the decision:

  1. Moab eliminates nodes that do not meet the hard resource requirements set by the job.
  2. Moab gathers affinity information, first from workload proximity rules and then from reservation affinity rules (See Affinity for more information.). Reservation affinity rules trump workload proximity rules.
  3. Moab allocates nodes using the allocation policy.
    • If more than enough nodes with Required affinity exist, only they are passed down for the final sort by the node allocation policy.
    • If the number of nodes with Required affinity matches the number of nodes requested exactly, then the node allocation policy is skipped entirely and all of those nodes are assigned to the job.
    • If too few nodes have Required affinity, all of them are assigned to the job, and the node allocation policy is then applied to the remaining eligible nodes (after Required, Moab uses Positive, then Neutral, then Negative affinity).

4.8.3 Resource-Based Algorithms

Moab contains a number of allocation algorithms that address some of the needs described earlier. You can also create allocation algorithms and interface them with the Moab scheduling system. Each of these policies has a name and descriptive alias. They can be configured using either one, but Moab will only report their names.

If ENABLEHIGHTHROUGHPUT is TRUE, you must set NODEALLOCATIONPOLICY to FIRSTAVAILABLE.
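
A minimal sketch of such a high-throughput configuration, assuming both parameters are set in moab.cfg:

ENABLEHIGHTHROUGHPUT TRUE
NODEALLOCATIONPOLICY FIRSTAVAILABLE
...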

The current suite of algorithms is described in what follows:

CPULOAD (alias: ProcessorLoad)

Nodes are selected that have the maximum amount of available, unused CPU power (<# of CPUs> - <CPU load>). CPULOAD is a good algorithm for timesharing node systems and applies to jobs starting immediately. For the purpose of future reservations, the MINRESOURCE algorithm is used.

FIRSTAVAILABLE (alias: InReportedOrder)

A simple first-come, first-served algorithm in which nodes are allocated in the order they are presented by the resource manager. This is a very simple and very fast algorithm.

LASTAVAILABLE (alias: InReverseReportedOrder)

Nodes are allocated in the reverse of the order in which they are presented by the resource manager; that is, the reverse of FIRSTAVAILABLE.

PRIORITY (alias: CustomPriority)

Allows a site to specify the priority of various static and dynamic aspects of compute nodes and allocate them with preference for higher priority nodes. It is highly flexible allowing node attribute and usage information to be combined with reservation affinity. Using node allocation priority, you can specify the following priority components:

  • ADISK - Local disk currently available to batch jobs in MB.
  • AMEM - Real memory currently available to batch jobs in MB.
  • APROCS - Processors currently available to batch jobs on node (configured procs - dedicated procs).
  • ARCH[<ARCH>] - Processor architecture.
  • ASWAP - Virtual memory currently available to batch jobs in MB.
  • CDISK - Total local disk allocated for use by batch jobs in MB.
  • CMEM - Total real memory on node in MB.
  • COST - Based on node CHARGERATE.
  • CPROCS - Total processors on node.
  • CSWAP - Total virtual memory configured on node in MB.
  • FEATURE[<FNAME>] - Boolean; specified feature is present on node.
  • FREETIME - The amount of time during which there is no reservation on the node, evaluated over either the job's wallclock limit (if a job is being considered) or a 2-month window. The more free time a node has within that window, the higher this value will be.
  • GMETRIC[<GMNAME>] - Current value of specified generic metric on node.
  • JOBCOUNT - Number of jobs currently running on node.
  • JOBFREETIME - The number of seconds that the node is idle between now and when the job is scheduled to start.
  • LOAD - Current 1 minute load average.
  • MTBF - Mean time between failures (in seconds).
  • NODEINDEX - Node's nodeindex as specified by the resource manager.
  • OS - True if job compute requirements match node operating system.
  • PARAPROCS - Processors currently available to batch jobs within partition (configured procs - dedicated procs).
  • POWER - TRUE if node is ON.
  • PREF - Boolean; node meets job specific resource preferences.
  • PRIORITY - Administrator specified node priority.
  • RANDOM - Per iteration random value between 0 and 1. (Allows introduction of random allocation factor.)

    Regardless of coefficient, the contribution of this weighted factor cannot exceed 32768.

    The coefficient, if any, of the RANDOM component must precede, not follow, the component in order to work correctly. For example:

    100 * RANDOM
  • SPEED - If set, node processor speed (procspeed); otherwise, relative node speed.
  • SUSPENDEDJCOUNT - Number of suspended jobs currently on the node.
  • USAGE - Percentage of time node has been running batch jobs since the last statistics initialization.
  • WINDOWTIME - The window of time between the end of one reservation and the beginning of another. Given a negative weight, this component can be used to pack reservations as close together on a node as possible (see Example 6 below for a sketch).

The node allocation priority function can be specified on a node-by-node or cluster-wide basis. In both cases, the recommended approach is to specify the PRIORITYF attribute with the NODECFG parameter. Some examples follow.

Example 1: Favor the fastest nodes with the most available memory that are running the fewest jobs

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF='SPEED + .01 * AMEM - 10 * JOBCOUNT'
...

If spaces are placed within the priority function for readability, the priority function value must be quoted to allow proper parsing.

Example 2: Favor the nodes with the least amount of idle time between now and the job's scheduled start time.

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF=-JOBFREETIME

Moab stacks jobs on the nodes that are busiest between now and the job's scheduled start time.

Example 3: A site has a batch system consisting of two dedicated "batchX" nodes, as well as numerous desktop systems. The allocation function should favor the batch nodes first, followed by the desktop systems that are the least loaded and have received the least historical usage.

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF='-LOAD - 5*USAGE'
NODECFG[batch1] PRIORITY=1000 PRIORITYF='PRIORITY + APROCS'
NODECFG[batch2] PRIORITY=1000 PRIORITYF='PRIORITY + APROCS'
...

Example 4: Pack tasks onto loaded nodes first.

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF=JOBCOUNT
...

Example 5: Pack tasks onto nodes with the most processors available and the lowest CPU temperature.

RMCFG[torque] TYPE=pbs
RMCFG[temp]   TYPE=NATIVE  CLUSTERQUERYURL=exec://$TOOLSDIR/hwmon.pl
NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF='100*APROCS - GMETRIC[temp]'
...
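
Example 6: Pack reservations as tightly as possible on each node. This is an illustrative sketch based on the WINDOWTIME description above (not one of the original examples); a negative weight gives nodes with smaller gaps between reservations a higher priority.

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITYF='-WINDOWTIME'
...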

MINRESOURCE (alias: MinimumConfiguredResources)

Prioritizes nodes according to the configured memory resources on each node. The nodes with the fewest configured memory resources that still meet the job's resource constraints are selected.

CONTIGUOUS (alias: Contiguous)

Allocates nodes in contiguous (linear) blocks as required by the Compaq RMS system.

MAXBALANCE (alias: ProcessorSpeedBalance)

Attempts to allocate the most balanced set of nodes possible to a job. In most cases, but not all, the metric for node balance is node procspeed. Thus, if possible, nodes with identical procspeeds are allocated to the job. If identical-procspeed nodes cannot be found, the algorithm allocates the set of nodes with the minimum procspeed span or range.

4.8.4 User-Defined Algorithms

User-defined algorithms allow administrators to define their own algorithms based on factors such as their system's network topology. When node allocation is based on topology, jobs finish faster, administrators see better cluster productivity and users pay less for resources.

4.8.4.A PLUGIN

This algorithm allows administrators to define their own node allocation policy and create a plug-in that allocates nodes based on factors such as a cluster's network topology.

4.8.5 Specifying Per Job Resource Preferences

While the resource-based node allocation algorithms can make a good guess at which compute resources would best satisfy a job, sites often possess a subset of jobs that benefit from more explicit resource allocation specification. For example, one job may perform best on a particular subset of nodes due to direct access to a tape drive, while another may be very memory intensive. Resource preferences are distinct from node requirements: node requirements describe what a job needs in order to run at all, while resource preferences describe what the job needs in order to run well. In general, the scheduler must satisfy a job's node requirement specification and then satisfy the job's resource preferences as well as possible.

4.8.5.A Specifying Resource Preferences

A number of resource managers natively support the concept of resource preferences (such as LoadLeveler). When using these systems, the language-specific preference keywords may be used. For systems that do not support resource preferences natively, Moab provides a resource manager extension keyword, "PREF," which you can use to specify desired resources. This extension allows specification of node feature, memory, swap, and disk space conditions that define whether a node is considered preferred.

Moab 5.2 (and earlier) only supports feature-based preferences.

4.8.5.B Selecting Preferred Resources

Enforcing resource preferences is not completely straightforward. A site may have a number of potentially conflicting requirements that the scheduler is asked to satisfy simultaneously. For example, a scheduler may be asked to maximize the proximity of the allocated nodes at the same time it is supposed to satisfy resource preferences and minimize node overcommitment. To allow site-specific weighting of these varying requirements, Moab allows resource preferences to be enabled through the PRIORITY node allocation algorithm. For example, to use resource preferences together with node load, the following configuration might be used:

NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT]     PRIORITYF='5 * PREF - LOAD'
...

To request specific resource preferences, a user could then submit a job indicating those preferences. In the case of a PBS job, the following can be used:

> qsub -l nodes=4,walltime=1:00:00,pref=feature:fast
