Moab Workload Manager

5.4 Node Availability Policies

Moab enables several features relating to node availability. These include policies that determine how per node resource availability should be reported, how node failures are detected, and what should be done in the event of a node failure.

5.4.1 Node Resource Availability Policies

Moab allows a job to be launched on a given compute node as long as the node is not full or busy. The NODEAVAILABILITYPOLICY parameter allows a site to determine what criteria constitute a busy node. The legal settings are listed in the following table:

Availability Policy   Description
DEDICATED             The node is considered busy if dedicated resources equal or exceed configured resources.
UTILIZED              The node is considered busy if utilized resources equal or exceed configured resources.
COMBINED              The node is considered busy if either dedicated or utilized resources equal or exceed configured resources.

The default setting for all nodes is COMBINED, indicating that a node can accept workload so long as the jobs allocated to the node do not request or use more resources than the node has available. In a load-balancing environment, this may not be the desired behavior. Setting the NODEAVAILABILITYPOLICY parameter to UTILIZED allows jobs to be packed onto a node even if the aggregate resources requested exceed the resources configured. For example, assume a scenario with a 4-processor compute node and 8 jobs requesting 1 processor each. If the resource availability policy were set to COMBINED, only 4 jobs would be allowed to start on this node, even if each job induced a load of less than 1.0. With the resource availability policy set to UTILIZED, the scheduler continues allowing jobs to start on the node until the node's load average exceeds a per-processor load value of 1.0 (in this case, a total load of 4.0). To prevent a node from being over-populated within a single scheduling iteration, Moab artificially raises the node's load for one scheduling iteration when starting a new job. On subsequent iterations, the actual measured node load information is used.
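
For example, the following moab.cfg entry (a minimal sketch of the load-based behavior described above) enables utilization-based packing:

# pack jobs onto nodes based on measured utilization rather than dedicated resources
NODEAVAILABILITYPOLICY  UTILIZED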

Per Resource Availability Policies

By default, the NODEAVAILABILITYPOLICY parameter sets a global per node resource availability policy that applies to all resource types on each node, such as processors, memory, swap, and local disk. However, the full syntax of this parameter is as follows:

<POLICY>[:<RESOURCETYPE>] ...

This syntax allows per resource availability specification. For example, consider the following:

NODEAVAILABILITYPOLICY  DEDICATED:PROC COMBINED:MEM COMBINED:DISK
...

This configuration causes Moab to consider only the quantity of processing resources actually dedicated to active jobs running on each node, ignoring utilized processor information (such as CPU load). For memory and disk, both utilized and dedicated resource information are combined to determine what resources are actually available for new jobs.

5.4.2 Node Categorization

Moab allows organizations to detect and use far richer information regarding node status than the standard idle, busy, and down states commonly found in batch systems. Using node categorization, organizations can record, track, and report per node and cluster level status, including the following categories:

Category              Description
active                Node is healthy and currently executing batch workload.
batchfailure          Node is unavailable due to a failure in the underlying batch system (such as a resource manager server or resource manager node daemon).
benchmark             Node is reserved for benchmarking.
emergencymaintenance  Node is reserved for unscheduled system maintenance.
gridreservation       Node is reserved for grid use.
hardwarefailure       Node is unavailable due to a failure in one or more aspects of its hardware configuration (such as a power failure, excessive temperature, memory, processor, or swap failure).
hardwaremaintenance   Node is reserved for scheduled system maintenance.
idle                  Node is healthy and is currently not executing batch workload.
jobreservation        Node is reserved for job use.
networkfailure        Node is unavailable due to a failure in its network adapter or in the switch.
other                 Node is in an uncategorized state.
otherfailure          Node is unavailable due to a general failure.
personalreservation   Node is reserved for dedicated use by a personal reservation.
site[1-8]             Site specified usage categorization.
softwarefailure       Node is unavailable due to a failure in a local software service (such as automounter, a security or information service such as NIS, local databases, or other required software services).
softwaremaintenance   Node is reserved for software maintenance.
standingreservation   Node is reserved by a standing reservation.
storagefailure        Node is unavailable due to a failure in the cluster storage system or local storage infrastructure (such as failures in Lustre, GPFS, PVFS, or SAN).
userreservation       Node is reserved for dedicated use by a particular user or group and may or may not be actively executing jobs.
vpc                   Node is reserved for VPC use.

Node categories can be explicitly assigned by cluster administrators using the mrsvctl -c command to create a reservation and associate a category with the reserved nodes for a specified timeframe. Further, outside of this explicit specification, Moab automatically mines all configured interfaces to learn about its environment and the health of the resources it is managing. Consequently, Moab can identify many hardware failures, software failures, and batch failures without any additional configuration. However, it is often desirable to make additional information available to Moab so that it can integrate this information into reports; automatically notify managers, users, and administrators; adjust internal policies to steer workload around failures; and launch various custom triggers to rectify or mitigate the problem.
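
For example, an administrator might place a node into a maintenance category with a command along the following lines (an illustrative sketch only; the host expression, duration, and the subtype attribute used to convey the category are assumptions and may differ by Moab version):

> mrsvctl -c -h node013 -d 8:00:00 -S subtype=hardwaremaintenance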

Note You can specify the FORCERSVSUBTYPE parameter to require all administrative reservations be associated with a node category at reservation creation time.

Example

NODECFG[DEFAULT] ENABLEPROFILING=TRUE
FORCERSVSUBTYPE  TRUE

Node health and performance information from external systems can be imported into Moab using the native resource manager interface. This is commonly done using generic metrics or consumable generic resources for performance information, and node categories or node variables for status information. Together with arbitrary node messaging information, Moab can aggregate detailed information from remote services and report it to other external services.
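
For example, a site might define an additional native resource manager dedicated to health monitoring with a configuration similar to the following (a minimal sketch; the resource manager name and script path are hypothetical):

# query an external health-monitoring script each scheduling iteration
RMCFG[healthmon]  TYPE=NATIVE  CLUSTERQUERYURL=exec:///opt/moab/tools/node.health.pl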

Note Use the NODECATCREDLIST parameter to generate extended node category based statistics.

5.4.3 Node Failure/Performance Based Notification

Moab can be configured so that node failures, or node performance levels that cross specified thresholds, trigger notification events. This is accomplished using the GEVENTCFG parameter as described in the Generic Event Overview section. For example, the following configuration can be used to trigger an email to administrators each time a node is marked down.

GEVENTCFG[nodedown] ACTION=notify REARM=00:20:00
...

5.4.4 Node Failure/Performance Based Triggers

Moab supports per node triggers that can be configured to fire when specific events occur or specific thresholds are met. These triggers can be used to modify internal policies or take external actions. A few examples follow (see the configuration sketch after this list):

  • decrease node allocation priority if node throughput drops below threshold X
  • launch local diagnostic/recovery script if parallel file system mounts become stale
  • reset high performance network adapters if high speed network connectivity fails
  • create general system reservation on node if processor or memory failure occurs
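
For example, a diagnostic/recovery script could be attached to a node as a failure trigger along the following lines (a hedged sketch; the event type, node name, and script path are assumptions, not a definitive recipe):

# run a local diagnostic/recovery script when the node object fails
NODECFG[node024]  TRIGGER=EType=fail,AType=exec,Action="/opt/moab/tools/node_diag.sh $OID"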

As mentioned, Moab triggers can be used to initiate almost any action, from sending mail to updating a database, to publishing data for an SNMP trap, to driving a web service.

5.4.5 Handling Transient Node Failures

Since Moab actively schedules both current and future actions of the cluster, it is often important for it to have a reasonable estimate of when failed nodes will again be available for use. This knowledge is particularly useful for proper scheduling of new jobs and management of resources with regard to backfill. With backfill, Moab determines which resources are available for priority jobs and when the highest priority idle jobs can run. If a node experiences a failure, Moab should have a concept of when the node will be restored.

When Moab analyzes down nodes for allocation, one of two issues may occur with the highest priority jobs. If Moab believes that down nodes will not be recovered for an extended period of time, a transient node failure within a reservation for a priority job may cause the reservation to slide far into the future allowing other lower priority jobs to allocate and launch on nodes previously reserved for it. Moments later, when the transient node failures are resolved, Moab may be unable to restore the early reservation start time as other jobs may already have been launched on previously available nodes.

In the reverse scenario, if Moab assumes that down nodes will be restored quickly, it may make reservations for top priority jobs that allocate those nodes. Over time, Moab slides those reservations further into the future as it determines that the reserved nodes are not being recovered. While this does not delay the start of the top priority jobs, these unfulfilled reservations can end up blocking other jobs that should have properly been backfilled and executed.

Creating Automatic Reservations

If a node experiences occasional transient failures (often not associated with a node state of down), Moab can automatically create a temporary reservation over the node to allow the transient failure time to clear and prevent Moab from attempting to re-use the node while the failure is active. This reservation behavior is controlled using the NODEFAILURERESERVETIME parameter as in the following example:

# reserve nodes for 1 minute if transient failures are detected
NODEFAILURERESERVETIME  00:01:00

Blocking Out Down Nodes

If one or more resource managers identify failures and mark nodes as down, Moab can be configured to associate a default unavailability time with this failure and the node state down. This is accomplished using the NODEDOWNSTATEDELAYTIME parameter. This delay time floats and is measured as a fixed time into the future from the time NOW; it is not associated with the time the node was originally marked down. For example, if the delay time was set to 10 minutes, and a node was marked down 20 minutes ago, Moab would still consider the node unavailable until 10 minutes into the future.

While it is difficult to select a good default value that works for all clusters, the following is a general rule of thumb:

  • Increase NODEDOWNSTATEDELAYTIME if jobs are getting blocked due to priority reservations sliding as down nodes are not recovered.
  • Decrease NODEDOWNSTATEDELAYTIME if high priority job reservations are getting regularly delayed due to transient node failures.

# assume down nodes will not be recovered for one hour
NODEDOWNSTATEDELAYTIME  01:00:00

5.4.6 Reallocating Resources When Failures Occur

If a failure occurs within a collection of nodes allocated to a job or reservation, Moab can automatically allocate replacement resources. For jobs, this behavior is configured with the JOBACTIONONNODEFAILURE parameter. For reservations, use the RSVREALLOCPOLICY parameter.
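
For example, the following moab.cfg entry (a minimal sketch showing only the job-side parameter; REQUEUE is one of the failure responses listed below) requeues affected jobs:

# requeue any job that loses an allocated node to a failure
JOBACTIONONNODEFAILURE  REQUEUE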

5.4.6.1 Allocated Resource Failure Policy for Jobs

How an active job behaves when one or more of its allocated resources fail depends on the allocated resource failure policy. Depending on the type of job, type of resources, and type of middleware infrastructure, a site may choose to have different responses based on the job, the resource, and the type of failure.

Failure Responses

By default, Moab cancels a job when an allocated resource failure is detected. However, you can specify the following actions:

Policy    Description
CANCEL    Cancels job.
HOLD      Requeues and holds job.
IGNORE    Ignores failure and allows job to continue running.
MIGRATE   Migrates failed task to new node. (Note: Only available with systems that provide migration.)
NOTIFY    Notifies administrator and user of failure but takes no further action.
REQUEUE   Requeues job and allows it to run when alternate resources become available.

Policy Precedence

For a given job, the applied policy can be set at various levels, with policy precedence applied in the following order: job, class/queue, partition, and then system level. The following table indicates the available methods for setting this policy:

Object       Parameter                                             Example
Job          resfailpolicy resource manager extension              > qsub -l resfailpolicy=requeue
Class/Queue  RESFAILPOLICY attribute of CLASSCFG parameter         CLASSCFG[batch] RESFAILPOLICY=CANCEL
Partition    JOBACTIONONNODEFAILURE attribute of PARCFG parameter  PARCFG[web3] JOBACTIONONNODEFAILURE=NOTIFY
System       NODEALLOCRESFAILUREPOLICY parameter                   NODEALLOCRESFAILUREPOLICY=MIGRATE

Failure Definition

Any allocated node going down constitutes a failure. However, for certain types of workload, responses to failures may be different depending on whether it is the master task (task 0) or a slave task that fails. To indicate that the associated policy should only take effect if the master task fails, the allocated resource failure policy should be specified with a trailing asterisk (*), as in the following example:

CLASSCFG[virtual_services] RESFAILPOLICY=requeue*

TORQUE Failure Details

When a node fails or becomes unresponsive, the resource manager central daemon identifies this failure within a configurable time frame (default: 60 seconds). Detection of this failure triggers an event that causes Moab to respond immediately. Based on the specified policy, Moab notifies administrators, holds the job, requeues the job, allocates replacement resources to the job, or cancels the job. If the job is canceled or requeued, Moab sends the request to TORQUE, which immediately frees all non-failed resources, making them available for use by other jobs. Once the failed node is recovered, it contacts the resource manager central daemon, determines that the associated job has been canceled or requeued, cleans up, and makes itself available for new workload.

See Also