2.9 Upgrading Nitro

This topic contains the procedures to follow to upgrade Nitro using the manual upgrade method.

In this topic:

  • 2.9.1 Upgrade from a Version Prior to 2.0
  • 2.9.2 Upgrade Nitro
  • 2.9.3 Verify Network Communication

2.9.1 Upgrade from a Version Prior to 2.0

Beginning with Nitro 2.0, the licensing procedure changed to use an RLM server. If your company already uses an RLM Server, you can skip this procedure.

The following steps are required if you are upgrading from a Nitro version prior to 2.0.

  1. Install or obtain access to an RLM server. See 2.5 Installing RLM Server.

  2. Obtain and install the Nitro license. This requires access to an RLM server. See 2.6.1 Obtain a Nitro License.
  3. Copy the license file to each compute node (coordinator). On each compute node, or on the shared file system, do the following:
    [root]# cp <licenseFileName>.lic /opt/nitro/bin/
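
Optionally, you can confirm that the RLM server is serving the Nitro license before continuing. The following status check uses the rlmutil utility that ships with RLM; it assumes rlmutil is on your PATH (adjust the path to your RLM installation if it is not), and the available options can vary by RLM version:

    [root]# rlmutil rlmstat -a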

2.9.2 Upgrade Nitro

On the Nitro Host, do the following:

  1. If you have not already done so, complete the steps to prepare the host. See 2.4 Preparing for Manual Installation or Upgrade.
  2. Back up your existing, customized launch scripts, job scripts, and the nitrosub command (if applicable).
    1. In /opt/nitro/bin/, back up the following:
      • launch_nitro.sh
      • launch_worker.sh (version 2.1 or later)
      • nitrosub command (version 2.1 or later)
    2. In /opt/nitro/etc/, back up the following:
      • nitro_job.sh (version 2.1 or later)
      • worker_job.sh (version 2.1 or later)
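
    For example, if all of the files listed above exist in your installation, you can preserve copies of them before they are overwritten (the backup directory name below is only an illustration; use any location you prefer):
      [root]# mkdir -p /opt/nitro/backup
      [root]# cp /opt/nitro/bin/launch_nitro.sh /opt/nitro/bin/launch_worker.sh /opt/nitro/bin/nitrosub /opt/nitro/backup/
      [root]# cp /opt/nitro/etc/nitro_job.sh /opt/nitro/etc/worker_job.sh /opt/nitro/backup/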
  3. Change to the root of the unpacked Nitro tarball bundle.

    [root]# cd nitro-tarball-bundle-<version>-<OS>
  4. Identify the Nitro product tarball (nitro-<version>-<OS>.tar.gz) and unpack the tarball into the same directory you created when you first installed Nitro (for example, /opt/nitro).
    [root]# tar xzvpf nitro-<version>-<OS>.tar.gz -C /opt/nitro --strip-components=1
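
    If you want to see which files the archive will overwrite before extracting, you can list its contents first (tar tzf only lists the archive; it does not extract anything):
      [root]# tar tzf nitro-<version>-<OS>.tar.gz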
  5. Copy the provided scripts and the nitrosub command from the /opt/nitro/scripts directory.

    This is a "copy" operation, not a "move" operation. This lets you customize your copies while keeping the factory versions available for reference and comparison.

    1. Copy the launch_nitro.sh and launch_worker.sh scripts for your resource manager to the bin directory. Each resource manager has a subdirectory within the scripts directory that contains its scripts. This example uses Torque as the resource manager.
      [root]# cp /opt/nitro/scripts/torque/launch_nitro.sh /opt/nitro/bin/
      [root]# cp /opt/nitro/scripts/torque/launch_worker.sh /opt/nitro/bin/
    2. Copy the nitrosub command to the bin directory.
      [root]# cp /opt/nitro/scripts/nitrosub /opt/nitro/bin/
    3. Copy the nitro_job.sh and the worker_job.sh scripts to the etc directory.
      [root]# cp /opt/nitro/scripts/nitro_job.sh /opt/nitro/etc/
      [root]# cp /opt/nitro/scripts/worker_job.sh /opt/nitro/etc/
  6. Merge any customizations from your backed-up launch scripts, job scripts, and nitrosub command (if applicable) into the new versions that you copied from the scripts directory.
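
    To see what changed before merging, you can compare each backed-up file with its newly copied counterpart. The backup path below matches the illustrative backup directory used earlier; substitute wherever you saved your copies:
      [root]# diff -u /opt/nitro/backup/launch_nitro.sh /opt/nitro/bin/launch_nitro.sh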
  7. If your system configuration allows multiple coordinators on the same node, additional configuration may be needed. See 2.23 Running Multiple Coordinators on the Same Node for more information.
  8. If you are not using a shared file system, copy the updated Nitro installation directory to all hosts.
    [root]# scp -r /opt/nitro root@host002:/opt

    If you are not using a shared file system, you may not be able to use the nitrosub command.
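
    If you have several hosts, a simple shell loop over the host names avoids repeating the command; the host names below are placeholders for your own:
      [root]# for h in host002 host003 host004; do scp -r /opt/nitro root@$h:/opt; done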

2.9.3 Verify Network Communication

Verify that the nodes that will run Nitro can communicate on the Nitro ports and that the nodes can communicate with one another.
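
One way to spot-check connectivity is to test each Nitro port from another node with a utility such as nc. The host name and <port> below are placeholders; substitute your actual node names and the ports your Nitro configuration uses:

    [root]# nc -zv host002 <port>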
