
2.17 Upgrading Torque Resource Manager

Torque 6.0 binaries are backward compatible with Torque 5.0 or later; however, they are not backward compatible with Torque versions prior to 5.0. When you upgrade to Torque 6.0.2 from a version prior to 5.0, all MOM and server daemons must be upgraded at the same time.

The job format is compatible between 6.0 and previous versions of Torque, and any queued jobs will upgrade to the new version. However, upgrading Torque while jobs are in a running state is not recommended.

This topic contains instructions on how to upgrade and start Torque Resource Manager (Torque).

If you need to upgrade a Torque version prior to 4.0, contact Adaptive Computing.

See Considerations Before Upgrading in the Torque Resource Manager Administrator Guide for additional important information, including how to handle running jobs during an upgrade, how to run mixed server/MOM versions, and how to upgrade the MOMs without taking compute nodes offline.

In this topic:

  2.17.1 Before You Upgrade
  2.17.2 Stop Torque Services
  2.17.3 Upgrade the Torque Server
  2.17.4 Update the Torque MOMs
  2.17.5 Update the Torque Clients
  2.17.6 Start Torque Services
  2.17.7 Perform Status and Error Checks

2.17.1 Before You Upgrade

This section contains information you should be aware of before upgrading.

In this section:

  2.17.1.A serverdb
  2.17.1.B Running Jobs
  2.17.1.C Cray Systems

2.17.1.A serverdb

The pbs_server configuration is saved in the file TORQUE_HOME/server_priv/serverdb. The first time you run Torque 4.1 or later, pbs_server converts this file from a binary format to an XML-like format.

Recommended: before shutting down pbs_server to upgrade it, make a backup of the settings in serverdb by running the following command:

qmgr -c "print server" > qmgr.backup

In the event of a loss of settings, they can be restored by running the following command:

qmgr < qmgr.backup
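
In addition to the qmgr dump, you can keep a copy of the serverdb file itself once pbs_server is stopped. A minimal sketch; serverdb.backup is an arbitrary name, and TORQUE_HOME is the same placeholder used above:

  [root]# cp -p TORQUE_HOME/server_priv/serverdb TORQUE_HOME/server_priv/serverdb.backup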

2.17.1.B Running Jobs

Before upgrading the system, all running jobs must complete. To prevent queued jobs from starting, nodes can be set to offline or all queues can be disabled (using the "started" queue attribute). See pbsnodes or Queue Attributes in the Torque Resource Manager Administrator Guide for more information.
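
For example, both approaches can be applied as follows; a minimal sketch assuming a hypothetical node named node01 and a queue named batch:

  [root]# pbsnodes -o node01                          # mark the node offline so no new jobs start on it
  [root]# qmgr -c "set queue batch started = false"   # keep the queue from starting queued jobs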

2.17.1.C Cray Systems

For upgrading Torque to 6.0.2 on a Cray system, refer to the Installation Notes for Moab and Torque for Cray in Appendix G of the Moab Workload Manager Administrator Guide.

2.17.2 Stop Torque Services

Do the following:

  1. On the Torque Server Host, shut down the Torque server.
    [root]# service pbs_server stop
  2. On each Torque MOM Host, shut down the Torque MOM service.

    Confirm all jobs have completed before stopping pbs_mom. You can do this by running "momctl -d3". If no jobs are running, you will see the message "NOTE: no local jobs detected" towards the bottom of the output. If jobs are still running and the MOM is shut down, you will only be able to track when the job completes; you will not be able to get completion codes or statistics. (See the sketch after this list for a way to check every MOM host at once.)

    [root]# service pbs_mom stop
  3. On each Torque Client Host (including the Moab Server Host, the Torque Server Host, and the Torque MOM Hosts, if applicable), shut down the trqauthd service.
    [root]# service trqauthd stop
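
To confirm that no jobs remain before shutting down the MOMs, you can check every MOM host from the Torque Server Host. A minimal sketch, assuming passwordless SSH as root and a hypothetical mom-hosts.txt file listing one MOM hostname per line:

  [root]# for host in $(cat mom-hosts.txt); do
  >   echo -n "${host}: "
  >   ssh ${host} momctl -d3 | grep 'no local jobs detected' || echo "jobs may still be running"
  > done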

2.17.3 Upgrade the Torque Server

You must complete all the previous steps in this topic before upgrading the Torque server. See the list of steps at the beginning of this topic.

On the Torque Server Host, do the following:

  1. Back up your server_priv directory.
    [root]# tar -czvf backup.tar.gz TORQUE_HOME/server_priv
  2. If not already installed, install the Boost C++ headers.
    [root]# yum install boost-devel
  3. Download the latest 6.0.2 build from the Adaptive Computing website.
  4. Install the latest Torque tarball. If your original installation used non-default configure options, supply the same options again (see the configure sketch after this list).
    [root]# cd /tmp
    [root]# tar xzvf torque-<version>-<build number>.tar.gz
    [root]# cd torque-<version>-<build number>
    [root]# ./configure
    [root]# make
    [root]# make install
  5. Update the pbs_server service startup script.
    1. Make a backup of your current service startup script.
      [root]# cp /etc/init.d/pbs_server pbs_server.bak
    2. Copy in the new stock service startup script.
      [root]# cp contrib/init.d/pbs_server /etc/init.d
    3. Merge in any customizations.
      [root]# vi /etc/init.d/pbs_server
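
If your original installation used non-default paths or options, pass the same options to ./configure in step 4 so that the new binaries are installed in the same locations. The flags below are illustrative only; substitute the values used by your original build:

  [root]# ./configure --prefix=/usr \
              --with-server-home=/var/spool/torque \
              --with-default-server=<torque-server-host>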

2.17.4 Update the Torque MOMs

Do the following:

  1. On the Torque Server Host, do the following:
    1. Create the self-extracting packages that are copied and executed on your nodes.
      [root]# make packages
      Building ./torque-package-clients-linux-x86_64.sh ...
      Building ./torque-package-mom-linux-x86_64.sh ...
      Building ./torque-package-server-linux-x86_64.sh ...
      Building ./torque-package-gui-linux-x86_64.sh ...
      Building ./torque-package-devel-linux-x86_64.sh ...
      Done.
      
      The package files are self-extracting packages that can be copied and executed on your production machines.  Use --help for options.
    2. Copy the self-extracting packages to each Torque MOM Host.

      Adaptive Computing recommends that you use a remote shell, such as SSH, to install packages on remote systems. Set up shared SSH keys if you do not want to supply a password for each Torque MOM Host.

      [root]# scp torque-package-mom-linux-x86_64.sh <torque-mom-host>:
    3. Copy the pbs_mom startup script to each Torque MOM Host.
      [root]# scp contrib/init.d/pbs_mom <torque-mom-host>:/etc/init.d
  2. On each Torque MOM Host, do the following:

    This step can be done from the Torque Server Host using a remote shell, such as SSH. Set up shared SSH keys if you do not want to supply a password for each Torque MOM Host.

    [root]# ./torque-package-mom-linux-x86_64.sh --install
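
The copy and install steps above can also be scripted from the Torque Server Host. A minimal sketch, assuming passwordless SSH as root and a hypothetical mom-hosts.txt file listing one MOM hostname per line:

  [root]# for host in $(cat mom-hosts.txt); do
  >   scp torque-package-mom-linux-x86_64.sh ${host}:              # copy the MOM package
  >   scp contrib/init.d/pbs_mom ${host}:/etc/init.d/              # copy the startup script
  >   ssh ${host} ./torque-package-mom-linux-x86_64.sh --install   # install on the MOM host
  > done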

2.17.5 Update the Torque Clients

This section contains instructions on updating the Torque clients on the Torque Client Hosts (including the Moab Server Host and Torque MOM Hosts, if applicable).

  1. On the Torque Server Host, do the following:
    1. Copy the self-extracting packages to each Torque Client Host.

      Adaptive Computing recommends that you use a remote shell, such as SSH, to install packages on remote systems. Set up shared SSH keys if you do not want to supply a password for each Torque Client Host.

      [root]# scp torque-package-clients-linux-x86_64.sh <torque-client-host>:
    2. Copy the trqauthd startup script to each Torque Client Host.
      [root]# scp contrib/init.d/trqauthd <torque-client-host>:/etc/init.d
  2. On each Torque Client Host, do the following:

    This step can be done from the Torque Server Host using a remote shell, such as SSH. Set up shared SSH keys if you do not want to supply a password for each Torque Client Host.

    [root]# ./torque-package-clients-linux-x86_64.sh --install
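
As with the MOM hosts, these steps can be scripted from the Torque Server Host. A minimal sketch, assuming passwordless SSH as root and a hypothetical client-hosts.txt file listing one client hostname per line:

  [root]# for host in $(cat client-hosts.txt); do
  >   scp torque-package-clients-linux-x86_64.sh ${host}:              # copy the client package
  >   scp contrib/init.d/trqauthd ${host}:/etc/init.d/                 # copy the startup script
  >   ssh ${host} ./torque-package-clients-linux-x86_64.sh --install   # install on the client host
  > done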

2.17.6 Start Torque Services

Do the following:

  1. On each Torque Client Host (including the Moab Server Host, the Torque Server Host, and the Torque MOM Hosts, if applicable), start up the trqauthd service.
    [root]# service trqauthd start
  2. On each Torque MOM Host, start up the Torque MOM service.
    [root]# service pbs_mom start
  3. On the Torque Server Host, start up the Torque server.
    [root]# service pbs_server start
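
If you also want the services to start automatically at boot, you can enable them. A minimal sketch, assuming a SysV init system with chkconfig (adjust accordingly on systemd-based distributions):

  [root]# chkconfig trqauthd on     # on each Torque Client Host
  [root]# chkconfig pbs_mom on      # on each Torque MOM Host
  [root]# chkconfig pbs_server on   # on the Torque Server Host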

2.17.7 Perform Status and Error Checks

On the Torque Server Host, do the following:

  1. Check the status of the jobs in the queue.
    [root]# qstat
  2. Check for errors.
    [root]# grep -i error /var/spool/torque/server_logs/*
    [root]# grep -i error /var/spool/torque/mom_logs/*
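
You can also confirm that the compute nodes have reconnected and, if you set nodes offline before the upgrade, return them to service. A brief sketch using standard Torque diagnostics; replace the placeholders with your own hostnames:

  [root]# pbsnodes -a                        # confirm nodes report an expected state
  [root]# momctl -d 1 -h <torque-mom-host>   # query an individual MOM for diagnostics
  [root]# pbsnodes -c <node>                 # clear the offline state set before the upgrade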
