1.1 Server Hardware Requirements

The Moab HPC Suite is installed and configured differently for small, medium, and large environments. This topic provides a general topology of the Moab HPC Suite and the server hardware requirements for each environment size.

In this topic:

  • 1.1.1 Topology
  • 1.1.2 Hardware Requirements

1.1.1 Topology

The following diagram provides a general topology of the Moab HPC Suite for a medium (with high throughput) or a large environment.

The software components that may be included in a Moab HPC Suite installation are described in the list below.

Moab Workload Manager
  A scheduling and management system designed for clusters and grids.

Moab Elastic Computing
  Manages resource expansion and contraction for bursty workloads by drawing on additional resources from private clouds or other data centers.

Torque Resource Manager - PBS Server
  A resource manager for Moab. Torque provides the low-level functionality to discover and report cluster resources/features and to start, hold, cancel, and monitor jobs. Required by Moab Workload Manager.

Torque Resource Manager - PBS MOM
  Torque MOMs are agents installed on each compute node that complete the tasks assigned to them by the Torque Server. When a multi-node job runs, one Torque MOM is assigned the role of Mother Superior, and all other nodes assigned to the job are sister nodes. Mother Superior manages the job across all the sister nodes by communicating with each of them and updating the Torque Server. Required by Torque.

Moab Passthrough
  Enables job submission and monitoring with Slurm.

Slurmd
  The compute node daemon of Slurm. It monitors all tasks running on the compute node, accepts work, launches tasks, and kills running tasks upon request. The Automated Installer does not install slurmd at this time; slurmd is assumed to already be installed.

Moab Accounting Manager
  An accounting management system that allows for usage tracking, charge accounting, and allocation enforcement for resource usage in technical computing environments. Required by Moab Workload Manager and Moab Web Services.

Moab Web Services (MWS)
  A component of the Moab HPC Suite that enables programmatic interaction with Moab Workload Manager via a RESTful interface. MWS lets you create and interact with Moab objects and properties such as jobs, nodes, virtual machines, and reservations. MWS is the preferred method for creating custom user interfaces for Moab and is the primary method by which Moab Viewpoint communicates with Moab. Required by Moab Viewpoint. (A request sketch appears after this component list.)

Reprise License Manager Server (RLM)
  A flexible and easy-to-use license manager with the power to serve enterprise users. Required by Moab Elastic Computing, Nitro, and Remote Visualization.

Moab Insight
  A component of the Moab HPC Suite that collects the data Moab emits on its message queue and stores it in a database. The message queue is efficient, can be encrypted, and tolerates disconnections and restarts on either side. Required by Moab Viewpoint and Kafka Master.

Nitro
  A powerful yet simple task-launching solution that operates as an independent product but can also integrate seamlessly with any HPC scheduler. In the Moab HPC Suite, Nitro is fully integrated with Viewpoint for seamless high-throughput job submission and monitoring.

Nitro Web Services
  Enables programmatic interaction with Nitro for obtaining Nitro job status information within Viewpoint. Required by Moab Viewpoint.

Moab Viewpoint
  A rich, easy-to-use portal for end users and administrators, designed to increase productivity through its visual web-based interface, powerful job management features, and other workload functions. The portal provides greater self-sufficiency for end users while reducing administrator overhead in High Performance Computing. The Nitro, Remote Visualization, Elastic Computing, Moab Passthrough, and Reporting and Analytics features are also licensable for use with Viewpoint. Required by Remote Visualization.

Remote Visualization Gateway
  Manages Remote Visualization sessions on the Remote Visualization Session servers. Remote Visualization is an extension of Viewpoint. Required by Viewpoint and Remote Visualization.

Remote Visualization Session
  Remote Visualization sessions provide access to remote applications, rendering remotely and transferring the pixels to the local browser. Required by Viewpoint and Remote Visualization Gateway.

Reporting Web Services (RWS)
  A component of Adaptive Computing suites that enables programmatic interaction with Moab Reporting and Analytics via a RESTful interface. RWS is the preferred method for creating custom user interfaces for Moab Reporting and Analytics and is the primary method by which Moab Viewpoint communicates with Moab Reporting and Analytics.

Reporting and Analytics
  Streams in massive amounts of workload and resource usage data from your High Performance Computing (HPC), High Throughput Computing (HTC), and Grid Computing environments, then correlates that information against users, groups, accounts, and organizations so you can see exactly how your investment is being used and how well that use aligns with your goals.

MongoDB
  A free and open-source, cross-platform, document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with schemas. Required by Moab Workload Manager, Moab Passthrough, Moab Web Services, Nitro Web Services, Reporting Web Services, and Spark Worker.

PostgreSQL
  An object-relational database management system (ORDBMS), i.e. an RDBMS with additional, optionally used object features, with an emphasis on extensibility and standards compliance. Required by Moab Workload Manager, Moab Passthrough, Moab Accounting Manager, Moab Web Services, and Moab Viewpoint.

Drill
  Apache Drill is an open-source software framework that supports data-intensive distributed applications for interactive analysis of large-scale datasets. Required by Reporting Web Services.

Hadoop
  The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure. Required by Spark Worker.

Spark Master
  Apache Spark is a fast and general engine for large-scale data processing. Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. The Spark Master uses one or more Spark Workers when processing live data streams. Data can be ingested from many sources, such as Kafka, Flume, Kinesis, or TCP sockets, and can be processed using complex algorithms expressed with high-level functions like map, reduce, join, and window. Processed data can then be pushed out to filesystems, databases, and live dashboards. Required by Reporting Web Services. (A streaming sketch appears after this component list.)

Spark Worker
  The Spark Worker is used by a Spark Master when processing live data streams. Required by Spark Master.

Kafka Master
  Apache Kafka is used for building real-time data pipelines and streaming applications. It is horizontally scalable, fault-tolerant, fast, and runs in production at thousands of companies. The Kafka Master uses one or more Kafka Brokers when pipelining and processing live data streams. Required by Spark Worker and Insight. (A produce/consume sketch appears after this component list.)

Kafka Broker
  A Kafka Broker is used by the Kafka Master to pipeline and process live data streams. Required by Kafka Master.
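
The following is a minimal sketch of the kind of RESTful interaction MWS enables, using Python's requests library to list jobs. The base URL, API version, and credentials shown are illustrative assumptions, not documented defaults; substitute the values from your own MWS configuration.

  # Query Moab Web Services for the list of jobs (illustrative sketch).
  # The base URL, api-version value, and credentials are assumptions.
  import requests

  MWS_BASE = "http://localhost:8080/mws/rest"  # assumed MWS base URL

  resp = requests.get(
      MWS_BASE + "/jobs",
      params={"api-version": 3},          # assumed API version
      auth=("moab-admin", "changeme!"),   # assumed MWS API credentials
      timeout=30,
  )
  resp.raise_for_status()
  for job in resp.json().get("results", []):
      print(job.get("name"), job.get("state"))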
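
As a sketch of the stream-processing model that the Spark Master and Spark Workers implement, the canonical PySpark example below counts words arriving on a TCP socket in five-second micro-batches. The host, port, and local master URL are placeholders; this is illustrative only and is not the Reporting Framework's actual streaming job.

  # Canonical Spark Streaming word count (illustrative sketch).
  # The socket source host/port and master URL are placeholders.
  from pyspark import SparkContext
  from pyspark.streaming import StreamingContext

  # Two local threads: one for the receiver, one for processing.
  sc = SparkContext(master="local[2]", appName="StreamingSketch")
  ssc = StreamingContext(sc, 5)  # 5-second micro-batches

  lines = ssc.socketTextStream("localhost", 9999)  # placeholder source
  counts = (lines.flatMap(lambda line: line.split())
                 .map(lambda word: (word, 1))
                 .reduceByKey(lambda a, b: a + b))
  counts.pprint()  # print each batch's word counts

  ssc.start()
  ssc.awaitTermination()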
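
Similarly, the sketch below shows the basic produce/consume pattern a Kafka Broker supports, using the third-party kafka-python package (not shipped with the suite). The broker address and topic name are assumptions for illustration; the suite's components are configured separately.

  # Basic Kafka produce/consume round trip (illustrative sketch).
  # Broker address and topic name are assumptions.
  from kafka import KafkaProducer, KafkaConsumer

  producer = KafkaProducer(bootstrap_servers="localhost:9092")
  producer.send("example-topic", b'{"event": "job.completed"}')
  producer.flush()  # block until buffered messages are sent

  consumer = KafkaConsumer(
      "example-topic",
      bootstrap_servers="localhost:9092",
      auto_offset_reset="earliest",  # read from the start of the topic
      consumer_timeout_ms=5000,      # stop iterating after 5 idle seconds
  )
  for message in consumer:
      print(message.value)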

1.1.2 Hardware Requirements

The following subsections list hardware requirements for Moab, Torque, and Reporting Framework environments of various deployment sizes.

1.1.2.A Moab and Torque Requirements

The following lists identify the minimum and recommended hardware requirements for each environment type. Use them as a guide when planning your suite topology.

Software requirements are listed per component rather than suite-wide, as the suite components reside on different hosts. See 1.2 Component Requirements.

Proof of Concept / Small Demo (50 compute nodes, <1k jobs/week)

Minimum requirements (per host distribution):

Moab Server+Torque Server Host

  • 4 Intel/AMD x86-64 cores
  • At least 8 GB RAM
  • At least 100 GB dedicated disk space

Insight Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Recommended requirements (targeting minimum number of hosts):

Same as minimum.

Medium (500 compute nodes, <100k jobs/week)

Minimum requirements (per host distribution):

Moab Server+Torque Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Insight Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 1024 GB disk space

Recommended requirements (targeting minimum number of hosts):

Moab Server+Torque Server Host

  • 16 Intel/AMD x86-64 cores
  • At least 32 GB RAM
  • At least 1 TB dedicated disk space

Insight Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • A dedicated 1 Gbit channel between Insight and Moab
  • 128 GB local SSD for swap
  • At least 1024 GB disk space

Medium with High Throughput or Larger (>500 compute nodes, >100k jobs/week)

Minimum requirements (per host distribution):

Moab Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Torque Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Insight Server Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 2048 GB disk space

Please note the following:

  • The Moab Server should not reside on the same host as the Torque Server.
  • The MWS Server must reside on the same host as the Moab Server (Moab Server Host).
  • The MAM Server may reside on its own host, on the Moab Server Host (preferred), or on another server's host (except for the Insight Server Host).
  • The Viewpoint Server may reside on its own host, on the Moab Server Host (preferred), or on another server's host (except for the Insight Server Host).
  • Databases may reside on the same host as, or on a different host from, their server components.

1.1.2.B Reporting Framework Requirements

The following lists show hardware requirements for the Reporting and Kafka hosts needed to support the addition of the Reporting Framework to a Moab environment. These requirements are in addition to the Moab and Torque requirements shown above.

Proof of Concept / Small Demo

Minimum requirements (per host distribution):

Reporting Master Host

  • 4 Intel/AMD x86-64 cores
  • At least 8 GB RAM
  • At least 512 GB dedicated disk space

Reporting Worker Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Kafka Broker Host

  • 4 Intel/AMD x86-64 cores
  • At least 6 GB RAM
  • At least 512 GB dedicated disk space

Recommended requirements (targeting minimum number of hosts):

Same as minimum.

Medium

Minimum requirements (per host distribution):

Reporting Master Host

  • 4 Intel/AMD x86-64 cores
  • At least 8 GB RAM
  • At least 1024 GB dedicated disk space

Reporting Worker Host

  • 8 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 512 GB dedicated disk space

Kafka Broker Host

  • 4 Intel/AMD x86-64 cores
  • At least 6 GB RAM
  • At least 1024 GB dedicated disk space

Recommended requirements (targeting minimum number of hosts):

Reporting Master Host

  • 4 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 1024 GB dedicated disk space

Reporting Worker Host

  • 8 Intel/AMD x86-64 cores
  • At least 32 GB RAM
  • At least 512 GB dedicated disk space

Kafka Broker Host

  • 4 Intel/AMD x86-64 cores
  • At least 6 GB RAM
  • At least 1024 GB dedicated disk space

Medium with High Throughput or Larger

Minimum requirements (per host distribution):

Reporting Master Host

  • 4 Intel/AMD x86-64 cores
  • At least 16 GB RAM
  • At least 2048 GB dedicated disk space

Reporting Worker Host

  • 8 Intel/AMD x86-64 cores
  • At least 32 GB RAM
  • At least 512 GB dedicated disk space

Kafka Broker Host

  • 4 Intel/AMD x86-64 cores
  • At least 6 GB RAM
  • At least 2048 GB dedicated disk space

More than one Reporting Worker host is recommended.
