
Server High Availability

You can now run TORQUE in a redundant or high availability mode. This means that there can be multiple instances of the server running and waiting to take over processing in the event that the currently running server fails.

The high availability feature is available in TORQUE 2.3 and later. TORQUE 2.4 includes several enhancements to high availability.

Contact Adaptive Computing before attempting to implement any type of high availability.

For more details, see these sections:

Redundant server host machines

High availability enables Moab HPC Suite to continue running even if pbs_server is brought down. This is done by running multiple copies of pbs_server which have their torque/server_priv directory mounted on a shared file system.

Do not use symlinks when sharing the TORQUE home directory or server_priv directories. A workaround is to use mount --rbind /path/to/share /var/spool/torque. It is also highly recommended that you share only server_priv, not the entire $TORQUEHOMEDIR.

The torque/server_name file must include the host names of all nodes that run pbs_server. All MOM nodes must also include the host names of all nodes running pbs_server in their torque/server_name file. The torque/server_name file is a comma-delimited list of host names.

For example:

host1,host2,host3

When configuring high availability, do not use $pbsserver to specify the host names. You must use the $TORQUEHOMEDIR/server_name file.

All instances of pbs_server need to be started with the --ha command line option that allows the servers to run at the same time. Only the first server to start will complete the full startup. The second server to start will block very early in the startup when it tries to lock the file torque/server_priv/server.lock. When the second server cannot obtain the lock, it will spin in a loop and wait for the lock to clear. The sleep time between checks of the lock file is one second.

Note that the servers can run not only on independent server hardware; there can also be multiple instances of pbs_server running on the same machine. This was not possible previously, as the second instance to start would always write an error and quit when it could not obtain the lock.
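For example, two instances started on the same host behave as described (the comments are explanatory, not command output):

> pbs_server --ha    # first instance acquires server.lock and completes startup
> pbs_server --ha    # second instance blocks, polling the lock once per second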

Enabling High Availability

To use high availability, you must start each instance of pbs_server with the --ha option.

Prior to version 4.0, TORQUE with HA was configured with an --enable-high-availability option. That option is no longer required.

Three server parameters help manage high availability: lock_file, lock_file_update_time, and lock_file_check_time.

The lock_file option allows the administrator to change the location of the lock file. The default location is torque/server_priv. If the lock_file option is used, the new location must be on the shared partition so all servers have access.
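For example, the lock file can be relocated to a shared path with qmgr (the path shown is illustrative, not a default):

> qmgr -c "set server lock_file=/shared/torque/server_priv/server.lock"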

The lock_file_update_time and lock_file_check_time parameters are used by the servers to determine if the primary server is active. The primary pbs_server will update the lock file based on the lock_file_update_time (default value of 3 seconds). All backup pbs_servers will check the lock file as indicated by the lock_file_check_time parameter (default value of 9 seconds). The lock_file_update_time must be less than the lock_file_check_time. When a failure occurs, the backup pbs_server takes up to the lock_file_check_time value to take over.

> qmgr -c "set server lock_file_check_time=5"

In the above example, after the primary pbs_server goes down, the backup pbs_server takes up to 5 seconds to take over. It takes additional time for all MOMs to switch over to the new pbs_server.

The clock on the primary and redundant servers must be synchronized in order for high availability to work. Use a utility such as NTP to ensure your servers have a synchronized time.
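One way to verify synchronization on servers running ntpd is to check the peer status on each host (chrony users would use chronyc sources instead):

> ntpq -p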

Use nothing but a plain, simple NFS file share that is not used by anybody or anything else (i.e., only Moab can use the file share).

Do not use any general-purpose NAS, do not use any parallel file system, and do not use company-wide shared infrastructure to set up Moab high availability using "native" high availability.

Enhanced High Availability with Moab

When TORQUE is run with an external scheduler such as Moab, and the pbs_server is not running on the same host as Moab, pbs_server needs to know where to find the scheduler. To do this, use the -l option as demonstrated in the example below (the port is required and the default is 15004).

> pbs_server -l <moabhost:port>

If Moab is running in HA mode, add a -l option for each redundant server.

> pbs_server -l <moabhost1:port> -l <moabhost2:port>

If pbs_server and Moab run on the same host, use the --ha option as demonstrated in the example below.

> pbs_server --ha

The root user of each Moab host must be added to the operators and managers lists of the server. This enables Moab to execute root level operations in TORQUE.
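For example, with Moab hosts moabhost1 and moabhost2 (the host names are placeholders):

> qmgr -c "set server managers += root@moabhost1"
> qmgr -c "set server operators += root@moabhost1"
> qmgr -c "set server managers += root@moabhost2"
> qmgr -c "set server operators += root@moabhost2"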

How Commands Select the Correct Server Host

The various commands that send messages to pbs_server usually accept an option for specifying the server name on the command line; if none is specified, they use the default server name. The default server name comes either from the environment variable PBS_DEFAULT or from the file torque/server_name.

When a command is executed and no explicit server is mentioned, an attempt is made to connect to the first server name in the list of hosts from PBS_DEFAULT or torque/server_name. If this fails, the next server name is tried. If all servers in the list are unreachable, an error is returned and the command fails.
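For example, a command can target a specific server explicitly, or rely on the fallback list (the host names are placeholders):

> qstat @host2
> export PBS_DEFAULT=host1,host2,host3
> qstat

The first command queries host2 directly; the last relies on the fallback behavior described above, trying host1 first and moving down the list on failure.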

Note that after the current server fails there is a period, while the new server starts up, during which it cannot process commands. The new server must read the existing configuration and job information from disk, so the length of this window varies. Commands issued during this period might fail due to expiring timeouts.

Job Names

Job names normally contain the name of the host machine where pbs_server is running. When job names are constructed, only the server name in $PBS_DEFAULT or the first name from the server specification list, $TORQUE_HOME/server_name, is used in building the job name.

Persistence of the pbs_server Process

The system administrator must ensure that pbs_server continues to run on the server nodes. This could be as simple as a cron job that counts the number of pbs_server processes in the process table and starts more if needed.
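The cron-based watchdog mentioned above can be sketched as a small script. The script name, the restart threshold, and the use of pgrep are assumptions for illustration, not part of TORQUE:

```shell
#!/bin/sh
# Hypothetical watchdog: restart pbs_server if no instance is running.
# Could be scheduled from cron, e.g.:
#   * * * * * /usr/local/sbin/check_pbs_server.sh

needs_restart() {
    # Succeeds (exit 0) when the given pbs_server process count is below one.
    [ "$1" -lt 1 ]
}

# Count pbs_server processes currently in the process table.
count=$(pgrep -x pbs_server | wc -l)

# Only attempt a restart when pbs_server is actually installed on this host.
if command -v pbs_server >/dev/null 2>&1 && needs_restart "$count"; then
    pbs_server --ha    # relaunch in high availability mode
fi
```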

High Availability of the NFS Server

Before installing a specific NFS HA solution please contact Adaptive Computing Support for a detailed discussion on NFS HA type and implementation path.

One consideration of this implementation is that it depends on the NFS file system also being redundant. NFS can be set up as a redundant service, and other shared file systems can also be used; consult the documentation for your chosen solution.

Installing TORQUE in High Availability Mode

The following procedure demonstrates a TORQUE installation in high availability (HA) mode.

To install TORQUE in HA mode

  1. Stop all firewalls or update your firewall to allow traffic from TORQUE services.
  2. > service iptables stop

    > chkconfig iptables off

    If you are unable to stop the firewall due to infrastructure restriction, open the following ports:

    • 15001[tcp,udp]
    • 15002[tcp,udp]
    • 15003[tcp,udp]
  3. Disable SELinux

    > vi /etc/sysconfig/selinux

     

    SELINUX=disabled

  4. Update your main ~/.bashrc profile to ensure you are always referencing the applications to be installed on all servers.
  5. # TORQUE

    export TORQUEHOME=/var/spool/torque

     

    # Library Path

     

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${TORQUEHOME}/lib

     

    # Update system paths

    export PATH=${TORQUEHOME}/bin:${TORQUEHOME}/sbin:${PATH}

  6. Verify server1 and server2 are resolvable via either DNS or looking for an entry in the /etc/hosts file.
  7. Configure the NFS Mounts by following these steps:
    1. Create mount point folders on fileServer.
    2. fileServer# mkdir -m 0755 /var/spool/torque

      fileServer# mkdir -m 0750 /var/spool/torque/server_priv

    3. Update /etc/exports on fileServer. The address range should include server1 and server2.
    4. /var/spool/torque/server_priv 192.168.0.0/255.255.255.0(rw,sync,no_root_squash)
    5. Update the list of NFS exported file systems.
    6. fileServer# exportfs -r

  8. If the NFS daemons are not already running on fileServer, start them.
  9. > systemctl restart rpcbind.service

    > systemctl start nfs-server.service

    > systemctl start nfs-lock.service

    > systemctl start nfs-idmap.service

  10. Mount the exported file systems on server1 by following these steps:
    1. Create the directory reference and mount them.
    2. server1# mkdir /var/spool/torque/server_priv

      Repeat this process for server2.

    3. Update /etc/fstab on server1 to ensure that NFS mount is performed on startup.
    4. fileServer:/var/spool/torque/server_priv /var/spool/torque/server_priv nfs rsize=8192,wsize=8192,timeo=14,intr

      Repeat this step for server2.
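    The steps above can be sketched as follows on server1 (repeat on server2). The fstab entry makes the mount persistent across reboots, while a manual mount applies it immediately:

    server1# mkdir -p /var/spool/torque/server_priv
    server1# mount -t nfs fileServer:/var/spool/torque/server_priv /var/spool/torque/server_priv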

  11. Install TORQUE by following these steps:
    1. Download and extract TORQUE 5.1.0 on server1.
    2. server1# wget http://github.com/adaptivecomputing/torque/branches/<version>/torque-<version>.tar.gz

      server1# tar -xvzf torque-<version>.tar.gz

    3. Navigate to the TORQUE directory and compile TORQUE on server1.
    4. server1# ./configure

      server1# make

      server1# make install

      server1# make packages

    5. If the installation directory is shared on both head nodes, then run make install on server1.
    6. server1# make install

      If the installation directory is not shared, repeat the download, configure, and install steps above on server2.

  12. Start trqauthd.

    server1# /etc/init.d/trqauthd start

  13. Configure TORQUE for HA.
    1. List the host names of all nodes that run pbs_server in the torque/server_name file. You must also include the host names of all nodes running pbs_server in the torque/server_name file of each MOM node. The syntax of torque/server_name is a comma-delimited list of host names.

    2. server1,server2

    3. Create a simple queue configuration for TORQUE job queues on server1.
    4. server1# pbs_server -t create

      server1# qmgr -c "set server scheduling=true"

      server1# qmgr -c "create queue batch queue_type=execution"

      server1# qmgr -c "set queue batch started=true"

      server1# qmgr -c "set queue batch enabled=true"

      server1# qmgr -c "set queue batch resources_default.nodes=1"

      server1# qmgr -c "set queue batch resources_default.walltime=3600"

      server1# qmgr -c "set server default_queue=batch"

      Because server_priv/* is a shared drive, you do not need to repeat this step on server2.

    5. Add the root users of TORQUE to the TORQUE configuration as an operator and manager.
      server1# qmgr -c "set server managers += root@server1"

      server1# qmgr -c "set server managers += root@server2"

      server1# qmgr -c "set server operators += root@server1"

      server1# qmgr -c "set server operators += root@server2"

      Because server_priv/* is a shared drive, you do not need to repeat this step on Server 2.

    7. You must update the lock file mechanism for TORQUE in order to determine which server is the primary. To do so, use the lock_file_update_time and lock_file_check_time parameters. The primary pbs_server will update the lock file based on the specified lock_file_update_time (default value of 3 seconds). All backup pbs_servers will check the lock file as indicated by the lock_file_check_time parameter (default value of 9 seconds). The lock_file_update_time must be less than the lock_file_check_time. When a failure occurs, the backup pbs_server takes up to the lock_file_check_time value to take over.

      server1# qmgr -c "set server lock_file_check_time=5"

      server1# qmgr -c "set server lock_file_update_time=3"

      Because server_priv/* is a shared drive, you do not need to repeat this step on server2.

    8. List the servers running pbs_server in the TORQUE acl_hosts file.
      server1# qmgr -c "set server acl_hosts += server1"

      server1# qmgr -c "set server acl_hosts += server2"

      Because server_priv/* is a shared drive, you do not need to repeat this step on server2.

    10. Restart the running pbs_server in HA mode.
    11. server1# qterm

    12. Start pbs_server in HA mode on both servers.
    13. server1# pbs_server --ha -l server2:port

      server2# pbs_server --ha -l server1:port

  14. Check the status of TORQUE in HA mode.
  15. server1# qmgr -c "p s"

    server2# qmgr -c "p s"

    Either command returns all settings from the active TORQUE server, regardless of which node you run it from.

    Drop one of the pbs_servers to verify that the secondary server picks up the request.

    server1# qterm

    server2# qmgr -c "p s"

    Stop the pbs_server on server2 and restart pbs_server on server1 to verify that both nodes can handle a request from the other.

  16. Install a pbs_mom on the compute nodes.
    1. Copy the install scripts to the compute nodes and install.
    2. Navigate to the shared source directory of TORQUE and run the following:
    3. node1# torque-package-mom-linux-x86_64.sh --install

      node1# torque-package-clients-linux-x86_64.sh --install

      Repeat this for each compute node. Verify that the /var/spool/torque/server_name file on each compute node lists all hosts running pbs_server.

    4. On server1 or server2, configure the nodes file to identify all available MOMs. To do so, edit the /var/spool/torque/server_priv/nodes file.
    5. node1 np=2

      node2 np=2

      Change the np flag to reflect the number of available processors on that node.

    6. Recycle the pbs_servers to verify that they pick up the MOM configuration.
    7. server1# qterm; pbs_server --ha -l server2:port

      server2# qterm; pbs_server --ha -l server1:port

    8. Start the pbs_mom on each execution node.
    9. node5# pbs_mom

      node6# pbs_mom

Installing TORQUE in High Availability Mode on Headless Nodes

The following procedure demonstrates a TORQUE installation in high availability (HA) mode on nodes with no local hard drive.

To install TORQUE in HA mode on a node with no local hard drive

  1. Stop all firewalls or update your firewall to allow traffic from TORQUE services.
  2. > service iptables stop

    > chkconfig iptables off

    If you are unable to stop the firewall due to infrastructure restriction, open the following ports:

    • 15001[tcp,udp]
    • 15002[tcp,udp]
    • 15003[tcp,udp]
  3. Disable SELinux

    > vi /etc/sysconfig/selinux

     

    SELINUX=disabled

  4. Update your main ~/.bashrc profile to ensure you are always referencing the applications to be installed on all servers.
  5. # TORQUE

    export TORQUEHOME=/var/spool/torque

     

    # Library Path

     

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${TORQUEHOME}/lib

     

    # Update system paths

    export PATH=${TORQUEHOME}/bin:${TORQUEHOME}/sbin:${PATH}

  6. Verify server1 and server2 are resolvable via either DNS or looking for an entry in the /etc/hosts file.
  7. Configure the NFS Mounts by following these steps:
    1. Create mount point folders on fileServer.
    2. fileServer# mkdir -m 0755 /var/spool/torque

    3. Update /etc/exports on fileServer. The address range should include server1 and server2.
    4. /var/spool/torque/ 192.168.0.0/255.255.255.0(rw,sync,no_root_squash)
    5. Update the list of NFS exported file systems.
    6. fileServer# exportfs -r

  8. If the NFS daemons are not already running on fileServer, start them.
  9. > systemctl restart rpcbind.service

    > systemctl start nfs-server.service

    > systemctl start nfs-lock.service

    > systemctl start nfs-idmap.service

  10. Mount the exported file systems on server1 by following these steps:
    1. Create the directory reference and mount them.
    2. server1# mkdir /var/spool/torque

      Repeat this process for server2.

    3. Update /etc/fstab on server1 to ensure that NFS mount is performed on startup.
    4. fileServer:/var/spool/torque/server_priv /var/spool/torque/server_priv nfs rsize=8192,wsize=8192,timeo=14,intr

      Repeat this step for server2.

  11. Install TORQUE by following these steps:
    1. Download and extract TORQUE 5.1.0 on server1.
    2. server1# wget http://github.com/adaptivecomputing/torque/branches/<version>/torque-<version>.tar.gz

      server1# tar -xvzf torque-<version>.tar.gz

    3. Navigate to the TORQUE directory and compile TORQUE on server1, installing into the shared directory.
    4. server1# ./configure --prefix=/var/spool/torque

      server1# make

      server1# make install

      server1# make packages

    5. If the installation directory is shared on both head nodes, then run make install on server1.
    6. server1# make install

      If the installation directory is not shared, repeat the download, configure, and install steps above on server2.

  12. Start trqauthd.

    server1# /etc/init.d/trqauthd start

  13. Configure TORQUE for HA.
    1. List the host names of all nodes that run pbs_server in the torque/server_name file. You must also include the host names of all nodes running pbs_server in the torque/server_name file of each MOM node. The syntax of torque/server_name is a comma-delimited list of host names.

    2. server1,server2

    3. Create a simple queue configuration for TORQUE job queues on server1.
    4. server1# pbs_server -t create

      server1# qmgr -c "set server scheduling=true"

      server1# qmgr -c "create queue batch queue_type=execution"

      server1# qmgr -c "set queue batch started=true"

      server1# qmgr -c "set queue batch enabled=true"

      server1# qmgr -c "set queue batch resources_default.nodes=1"

      server1# qmgr -c "set queue batch resources_default.walltime=3600"

      server1# qmgr -c "set server default_queue=batch"

      Because TORQUEHOME is a shared drive, you do not need to repeat this step on server2.

    5. Add the root users of TORQUE to the TORQUE configuration as an operator and manager.
      server1# qmgr -c "set server managers += root@server1"

      server1# qmgr -c "set server managers += root@server2"

      server1# qmgr -c "set server operators += root@server1"

      server1# qmgr -c "set server operators += root@server2"

      Because TORQUEHOME is a shared drive, you do not need to repeat this step on server2.

    7. You must update the lock file mechanism for TORQUE in order to determine which server is the primary. To do so, use the lock_file_update_time and lock_file_check_time parameters. The primary pbs_server will update the lock file based on the specified lock_file_update_time (default value of 3 seconds). All backup pbs_servers will check the lock file as indicated by the lock_file_check_time parameter (default value of 9 seconds). The lock_file_update_time must be less than the lock_file_check_time. When a failure occurs, the backup pbs_server takes up to the lock_file_check_time value to take over.

      server1# qmgr -c "set server lock_file_check_time=5"

      server1# qmgr -c "set server lock_file_update_time=3"

      Because TORQUEHOME is a shared drive, you do not need to repeat this step on server2.

    8. List the servers running pbs_server in the TORQUE acl_hosts file.
      server1# qmgr -c "set server acl_hosts += server1"

      server1# qmgr -c "set server acl_hosts += server2"

      Because TORQUEHOME is a shared drive, you do not need to repeat this step on server2.

    10. Restart the running pbs_server in HA mode.
    11. server1# qterm

    12. Start pbs_server in HA mode on both servers.
    13. server1# pbs_server --ha -l server2:port

      server2# pbs_server --ha -l server1:port

  14. Check the status of TORQUE in HA mode.
  15. server1# qmgr -c "p s"

    server2# qmgr -c "p s"

    Either command returns all settings from the active TORQUE server, regardless of which node you run it from.

    Drop one of the pbs_servers to verify that the secondary server picks up the request.

    server1# qterm

    server2# qmgr -c "p s"

    Stop the pbs_server on server2 and restart pbs_server on server1 to verify that both nodes can handle a request from the other.

  16. Install a pbs_mom on the compute nodes.
    1. On server1 or server2, configure the nodes file to identify all available MOMs. To do so, edit the /var/spool/torque/server_priv/nodes file.
    2. node1 np=2

      node2 np=2

      Change the np flag to reflect the number of available processors on that node.

    3. Recycle the pbs_servers to verify that they pick up the MOM configuration.
    4. server1# qterm; pbs_server --ha -l server2:port

      server2# qterm; pbs_server --ha -l server1:port

    5. Start the pbs_mom on each execution node.
    6. server1# pbs_mom -d <mom-server1>

      server2# pbs_mom -d <mom-server2>

Example Setup of High Availability

  1. The machines running pbs_server must have access to a shared server_priv/ directory (usually an NFS share on a MoM).
  2. All MoMs must have the same content in their server_name file. This can be done manually or via an NFS share. The server_name file contains a comma-delimited list of the hosts that run pbs_server.
  3. # List of all servers running pbs_server

    server1,server2

  4. The machines running pbs_server must be listed in acl_hosts.
  5. > qmgr -c "set server acl_hosts += server1"

    > qmgr -c "set server acl_hosts += server2"

  6. Start pbs_server with the --ha option.
  7. [root@server1]$ pbs_server --ha

     

    [root@server2]$ pbs_server --ha


© 2015 Adaptive Computing