Moab Workload Manager

17.10 Grid Scheduling Policies

17.10.1 Peer-to-Peer Resource Affinity Overview

The concept of resource affinity stems from a number of facts:

  • Certain compute architectures are able to execute certain compute jobs more effectively than others.
  • From a given location, staging jobs to various clusters may require more expensive allocations, more data and network resources, and more use of system services.
  • Certain compute resources are owned by external organizations and should be used sparingly.

Regardless of the reason, Moab servers allow the use of peer resource affinity to guide jobs to the clusters that are the best fit according to a number of criteria.

At a high level, this is accomplished by creating a number of job templates and associating those profiles with different peers, each association carrying a different impact on estimated execution time and peer affinity.

17.10.2 Peer Allocation Policies

A direct way to assign a peer allocation algorithm is with the PARALLOCATIONPOLICY parameter (does not apply to Master/Slave grids). Legal values are listed in the following table:

Value            Description
BestFit          Allocates resources from the eligible peer with the fewest available resources; measured in tasks (minimizes fragmentation of large resource blocks).
BestFitP         Allocates resources from the eligible peer with the fewest available resources; measured in percent of configured resources (minimizes fragmentation of large resource blocks).
FirstStart       Allocates resources from the eligible peer that can start the job the soonest.
FirstCompletion  Allocates resources from the eligible peer that can complete the job the soonest. (Takes into account data staging time and job-specific machine speed.)
LoadBalance      Allocates resources from the eligible peer with the most available resources; measured in tasks (balances workload distribution across potential peers).
LoadBalanceP     Allocates resources from the eligible peer with the most available resources; measured in percent of configured resources (balances workload distribution across potential peers).
RoundRobin       Allocates resources from the eligible peer that has been least recently allocated.
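As a minimal sketch, assuming FIRSTSTART is the accepted spelling of the start-soonest value in your Moab version (parameter values are conventionally case-insensitive; verify against your version's parameter reference), moab.cfg might contain:

# Peer allocation algorithm for peer-to-peer grids
# (does not apply to Master/Slave grids)
PARALLOCATIONPOLICY FIRSTSTART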

Note: The mdiag -t -v command can be used to view the current calculated partition priority values.

17.10.3 Per-partition Scheduling

Per-partition scheduling can be enabled by adding the following lines to moab.cfg:

PERPARTITIONSCHEDULING TRUE
JOBMIGRATEPOLICY JUSTINTIME

To use per-partition scheduling, you must configure fairshare trees in which particular users have higher priorities on one partition, and other users have higher priorities on a different partition.
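A minimal sketch of such a fairshare tree, assuming Moab's FSTREE parameter syntax (the tree names, user names, and share values here are purely illustrative, and the association of each subtree with a partition is assumed to be handled per your version's fairshare documentation):

# Illustrative fairshare tree: engineering users receive more
# shares than research users; swap the weights in the tree used
# for the other partition to invert the priorities there
FSTREE[root]     SHARES=100 MEMBERLIST=eng,research
FSTREE[eng]      SHARES=70  MEMBERLIST=user:alice,user:bob
FSTREE[research] SHARES=30  MEMBERLIST=user:carol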