
2.2 Installing Torque Resource Manager

If you intend to use Torque Resource Manager 6.1.1.1 with Moab Workload Manager, you must run Moab version 8.0 or later. However, some Torque functionality may not be available. See Compatibility Requirements in the Moab HPC Suite Release Notes for more information.

This topic contains instructions on how to install and start Torque Resource Manager (Torque).

For Cray systems, Adaptive Computing recommends that you install Moab and Torque Servers (head nodes) on commodity hardware (not on Cray compute/service/login nodes).

However, you must install the Torque pbs_mom daemon and Torque client commands on the Cray login and "mom" service nodes, because pbs_mom must run on a Cray service node within the Cray system so that it has access to the Cray ALPS subsystem.

See Installation Notes for Moab and Torque for Cray in the Moab Workload Manager Administrator Guide for instructions on installing Moab and Torque on a non-Cray server.

In this topic:

  • 2.2.1 Open Necessary Ports
  • 2.2.2 Install Dependencies, Packages, or Clients
  • 2.2.3 Install Torque Server
  • 2.2.4 Install Torque MOMs
  • 2.2.5 Install Torque Clients
  • 2.2.6 Configure Data Management

2.2.1 Open Necessary Ports

Torque requires certain ports to be open for essential communication.

If your site is running firewall software on its hosts, you will need to configure the firewall to allow connections to the necessary ports.

Location                         Ports   Functions                                        When Needed
-------------------------------  ------  -----------------------------------------------  -----------
Torque Server Host               15001   Torque Client and MOM communication to           Always
                                         Torque Server
Torque MOM Host (Compute Nodes)  15002   Torque Server communication to Torque MOMs       Always
Torque MOM Host (Compute Nodes)  15003   Torque MOM communication to other Torque MOMs    Always
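
For example, the ports can be opened as follows (a sketch assuming the hosts run firewalld; adjust the commands for SuSEfirewall2 or whatever firewall software your site uses):

  [root]# firewall-cmd --permanent --add-port=15001/tcp    # on the Torque Server Host
  [root]# firewall-cmd --permanent --add-port=15002/tcp --add-port=15003/tcp    # on each Torque MOM Host
  [root]# firewall-cmd --reload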


2.2.2 Install Dependencies, Packages, or Clients

In this section:

  • 2.2.2.A Install Packages
  • 2.2.2.B Install hwloc

2.2.2.A Install Packages

On the Torque Server Host, use the following command to install the required dependencies, including the libxml2-devel, libopenssl-devel, and boost-devel packages and the build toolchain.

[root]# zypper install libopenssl-devel libtool libxml2-devel boost-devel gcc gcc-c++ make gmake postfix
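
If you want to confirm that the development packages are in place before building, you can query the RPM database (an optional quick check):

  [root]# rpm -q libopenssl-devel libxml2-devel boost-devel gcc-c++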

2.2.2.B Install hwloc

Using "zypper install hwloc" may install an older, non-supported version.

When cgroups are enabled (recommended), hwloc version 1.9.1 or later is required. NVIDIA K80 requires libhwloc 1.11.0. If cgroups are to be enabled, check the Torque Server Host to see if the required version of hwloc is installed. You can check the version number by running the following command:

[root]# hwloc-info --version

The following instructions are for installing version 1.9.1.

If hwloc is not installed or needs to be upgraded to the required version, do the following:

    1. On the Torque Server Host, each Torque MOM Host, and each Torque Client Host, do the following:
      1. Download hwloc-1.9.1.tar.gz from https://www.open-mpi.org/software/hwloc/v1.9.
      2. Run each of the following commands in order.
        [root]# zypper install gcc make
        [root]# tar -xzvf hwloc-1.9.1.tar.gz
        [root]# cd hwloc-1.9.1
        [root]# ./configure
        [root]# make
        [root]# make install
    2. Run the following commands on the Torque Server Host only.
      [root]# echo /usr/local/lib >/etc/ld.so.conf.d/hwloc.conf
      [root]# ldconfig
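
After the install and ldconfig, it is worth confirming that the new library is the one that will be found and, since these instructions assume cgroups, that the cgroup hierarchy is present. A quick sanity check (assuming a systemd-based host where the cgroup controllers are mounted under /sys/fs/cgroup):

  [root]# ldconfig -p | grep hwloc     # should list /usr/local/lib/libhwloc.so
  [root]# hwloc-info --version         # should now report 1.9.1 (or later)
  [root]# ls /sys/fs/cgroup            # cpuset, memory, etc. indicate cgroup support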

2.2.3 Install Torque Server

You must complete the tasks to install the dependencies, packages, or clients before installing Torque Server. See 2.2.2 Install Dependencies, Packages, or Clients.

If your configuration uses firewalls, you must also open the necessary ports before installing the Torque Server. See 2.2.1 Open Necessary Ports.

On the Torque Server Host, do the following:

    1. Download the latest 6.1.1.1 build from the Adaptive Computing website. It can also be downloaded via the command line (the GitHub method or the tarball distribution).
      • Clone the source from github.

        If git is not installed:

        [root]# zypper install git
        [root]# git clone https://github.com/adaptivecomputing/torque.git -b 6.1.1.1 6.1.1.1 
        [root]# cd 6.1.1.1
        [root]# ./autogen.sh
      • Get the tarball source distribution.
        [root]# zypper install wget
        [root]# wget http://www.adaptivecomputing.com/download/torque/torque-6.1.1.1.tar.gz -O torque-6.1.1.1.tar.gz
        [root]# tar -xzvf torque-6.1.1.1.tar.gz
        [root]# cd torque-6.1.1.1/
    2. Depending on your system configuration, you will need to add ./configure command options.

      At a minimum, you add:

      • --enable-cgroups
      • --with-hwloc-path=/usr/local

      See 1.2.1 Torque for more information.

      These instructions assume you are using cgroups. When cgroups are supported, cpusets are handled by the cgroup cpuset subsystem. If you are not using cgroups, use --enable-cpusets instead.

      If --enable-gui is part of your configuration, do the following:

      $ cd /usr/lib64
      $ ln -s libXext.so.6.4.0 libXext.so
      $ ln -s libXss.so.1 libXss.so

      When finished, cd back to your install directory.

      See Customizing the Install in the Torque Resource Manager Administrator Guide for more information on which options are available to customize the ./configure command.

    3. Run each of the following commands in order.
      [root]# ./configure --enable-cgroups --with-hwloc-path=/usr/local # add any other specified options
      [root]# make
      [root]# make install
    4. Source the appropriate profile file to add /usr/local/bin and /usr/local/sbin to your path.
      [root]# . /etc/profile.d/torque.sh
    5. Initialize serverdb by executing the torque.setup script.
      [root]# ./torque.setup root
    6. Add nodes to the /var/spool/torque/server_priv/nodes file. See Specifying Compute Nodes in the Torque Resource Manager Administrator Guide for information on syntax and options for specifying compute nodes. (A hypothetical example nodes file is sketched after these steps.)
    7. Configure pbs_server to start automatically at system boot, and then start the daemon.
      [root]# qterm
      [root]# systemctl enable pbs_server.service
      [root]# systemctl start pbs_server.service
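
As an illustration, a minimal server_priv/nodes file might look like the following (the hostnames, core counts, and the bigmem property are hypothetical; np sets the number of execution slots per node):

  node01 np=16
  node02 np=16
  node03 np=32 bigmem

Once pbs_server is running, you can confirm its configuration with "qmgr -c 'print server'"; "pbsnodes -a" will list the nodes once their MOMs are installed and started (see 2.2.4).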

2.2.4 Install Torque MOMs

In most installations, you will install a Torque MOM on each of your compute nodes.

See Specifying Compute Nodes or Configuring Torque on Compute Nodes in the Torque Resource Manager Administrator Guide for more information.

Do the following (a scripted sketch that automates these steps across many hosts appears at the end of this section):

    1. On the Torque Server Host, do the following:
      1. Create the self-extracting packages that are copied and executed on your nodes.
        [root]# make packages
        Building ./torque-package-clients-linux-x86_64.sh ...
        Building ./torque-package-mom-linux-x86_64.sh ...
        Building ./torque-package-server-linux-x86_64.sh ...
        Building ./torque-package-gui-linux-x86_64.sh ...
        Building ./torque-package-devel-linux-x86_64.sh ...
        Done.
        
        The package files are self-extracting packages that can be copied and executed on your production machines.  Use --help for options.
      2. Copy the self-extracting MOM packages to each Torque MOM Host.

        Adaptive Computing recommends that you use a remote shell, such as SSH, to install packages on remote systems. Set up shared SSH keys if you do not want to supply a password for each Torque MOM Host.

        [root]# scp torque-package-mom-linux-x86_64.sh <mom-node>:
        
      3. Copy the pbs_mom startup script to each Torque MOM Host.
        [root]# scp contrib/systemd/pbs_mom.service <mom-node>:/usr/lib/systemd/system/
      4. Not all sites see an inherited ulimit, but those that do can change the ulimit in the pbs_mom init script. The pbs_mom init script is responsible for starting and stopping the pbs_mom process.

    2. On each Torque MOM Host, do the following:
      1. Install cgroup-tools.
        [root]# zypper install libcgroup-tools
      2. Install the self-extracting MOM package.
        [root]# ./torque-package-mom-linux-x86_64.sh --install
      3. Configure pbs_mom to start at system boot, and then start the daemon.

        [root]# systemctl enable pbs_mom.service
        [root]# systemctl start pbs_mom.service
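
If you have many MOM hosts, the per-host steps above can be scripted from the Torque Server Host. A minimal sketch, assuming shared SSH keys are in place and that the hostnames (node01, node02, ...) are placeholders for your own:

  [root]# for host in node01 node02 node03; do
              scp torque-package-mom-linux-x86_64.sh ${host}:
              scp contrib/systemd/pbs_mom.service ${host}:/usr/lib/systemd/system/
              ssh ${host} "zypper --non-interactive install libcgroup-tools && \
                           ./torque-package-mom-linux-x86_64.sh --install && \
                           systemctl enable pbs_mom.service && \
                           systemctl start pbs_mom.service"
          done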

2.2.5 Install Torque Clients

If you want to have the Torque client commands installed on hosts other than the Torque Server Host (such as the compute nodes or separate login nodes), do the following:

    1. On the Torque Server Host, do the following:
      1. Copy the self-extracting client package to each Torque Client Host.

        Adaptive Computing recommends that you use a remote shell, such as SSH, to install packages on remote systems. Set up shared SSH keys if you do not want to supply a password for each Torque Client Host.

        [root]# scp torque-package-clients-linux-x86_64.sh <torque-client-host>:
      2. Copy the trqauthd startup script to each Torque Client Host.
        [root]# scp contrib/systemd/trqauthd.service <torque-client-host>:/usr/lib/systemd/system/
    2. On each Torque Client Host, install the self-extracting client package:
      [root]# ./torque-package-clients-linux-x86_64.sh --install
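
Note that the trqauthd unit file copied above is only staged on the client host; the client commands authenticate through trqauthd, so the daemon must be running before they will work. A minimal sketch of starting it and verifying a client command:

  [root]# systemctl enable trqauthd.service
  [root]# systemctl start trqauthd.service
  [root]# qstat -q    # should print the queue summary from the Torque Server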

2.2.6 Configure Data Management

When a batch job completes, stdout and stderr files are generated and placed in the spool directory on the master Torque MOM Host for the job instead of the submit host. You can configure the Torque batch environment to copy the stdout and stderr files back to the submit host. See Configuring Data Management in the Torque Resource Manager Administrator Guide for more information.
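
For example, one common approach (a sketch, assuming home directories are shared between the submit hosts and the compute nodes) is the $usecp directive in each MOM's configuration file, which maps remote destination paths to local ones so output files are copied locally rather than over the network:

  # /var/spool/torque/mom_priv/config
  $usecp *:/home /home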

