
Hardware and Software Requirements

This page describes the hardware and software requirements and recommendations for running an HPCC system. The HPCC system is designed to run on commodity hardware and would likely work well on almost any hardware, but to take full advantage of its power, you should deploy it on more modern, advanced hardware.

Hardware and software technology are constantly changing and improving, so the latest requirements and recommendations are available on the HPCC Systems Portal. The System Requirements page describes the latest platform requirements in detail.

Network Switch

The network switch is a significant component of the HPCC System.

Switch requirements

  • Sufficient number of ports to allow all nodes to be connected directly to it

  • IGMP v.2 support 

  • IGMP snooping support

Your HPCC system performs best when each node is connected directly to a single switch, so size the switch to your system: make sure it has enough capacity to provide a dedicated port for every node.

Switch additional recommended features

  • Gigabit speed

  • Non-blocking/Non-oversubscribed backplane

  • Low latency (under 35 µs)

  • Layer 3 switching

  • Managed and monitored (SNMP is a plus)

  • Port channel (port bundling) support

Generally, higher-end, higher throughput switches are also going to provide better performance. For larger systems, a high-capacity managed switch that can be configured and tuned for HPCC efficiency is the best choice.

Load Balancer

A load balancer distributes network traffic across a number of servers. Each Roxie Node is capable of receiving requests and returning results. Therefore, a load balancer distributes the load in an efficient manner to get the best performance and avoid a potential bottleneck.
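The round-robin strategy listed under the minimum requirements can be sketched in a few lines. This is an illustrative sketch only, not HPCC code; the node addresses and the route_request helper are hypothetical:

```python
from itertools import cycle

# Hypothetical Roxie node addresses -- illustrative only.
roxie_nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: hand each incoming request to the next node in turn,
# wrapping back to the first node after the last.
next_node = cycle(roxie_nodes)

def route_request(request_id):
    """Return the node that should handle this request."""
    return next(next_node)

# Six requests cycle through the three nodes twice.
assignments = [route_request(i) for i in range(6)]
```

Round-robin treats every node as equally capable, which matches a Roxie cluster where any node can receive a request and return a result; the more flexible strategies in the standard requirements (F5 iRules or equivalent) allow routing decisions based on load or request content instead.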

Load Balancer Requirements

Minimum requirements

  • Throughput: 1 Gbps

  • Ethernet ports: 2

  • Balancing Strategy: Round Robin

Standard requirements

  • Throughput: 8 Gbps

  • Gigabit Ethernet ports: 4

  • Balancing Strategy: Flexible (F5 iRules or equivalent)

Recommended capabilities

  • Ability to provide cyclic load rotation (not load balancing).

  • Ability to forward SOAP/HTTP traffic

  • Ability to provide triangulation/n-path routing (traffic incoming through the load balancer to the node, replies sent out via the switch).

  • Ability to treat a cluster of nodes as a single entity (for load balancing clusters not nodes)


  • Ability to stack or tier the load balancers across multiple levels if a cluster cannot be treated as a single entity.


An HPCC system can run as a single-node system or a multi-node system.

These hardware recommendations are intended for a multi-node production system. A test system can use less stringent specifications. Also, while it is easier to manage a system where all nodes are identical, this is not required. However, it is important to note that your system will only run as fast as its slowest node.

Node minimum requirements

  • Pentium 4 or newer CPU

  • 32-bit

  • 1GB RAM per slave

    (Note: If you configure more than one slave per node, the node's memory is divided among the slaves. For example, if you want 2 slaves per node with each having 4 GB of memory, the server would need 8 GB total.)

  • One Hard Drive (with sufficient free space to handle the size of the data you plan to process) or Network Attached Storage.

  • 1 GigE network interface
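The per-slave memory rule in the note above comes down to simple arithmetic, sketched here for clarity (the helper name is hypothetical, not part of HPCC):

```python
def total_node_ram_gb(slaves_per_node, ram_per_slave_gb):
    """A node's memory is divided among its slaves, so the server
    needs the per-slave allowance multiplied by the slave count."""
    return slaves_per_node * ram_per_slave_gb

# The example from the note above: 2 slaves at 4 GB each -> 8 GB total.
ram_needed = total_node_ram_gb(2, 4)
```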

Node recommended specifications

  • Dual Core i7 CPU (or better)

  • 64-bit

  • 4 GB RAM (or more) per slave

  • 1 GigE network interface

  • PXE boot support in BIOS

    PXE boot support is recommended so you can manage the OS, packages, and other settings when you have a large system.

  • Optionally IPMI and KVM over IP support

    For Roxie nodes:

  • Two 10K RPM (or faster) SAS Hard Drives

    Typically, drive speed is the priority for Roxie nodes

    For Thor nodes:

  • Two 7,200 RPM (or faster) SATA hard drives

  • Optionally 3 or more hard drives can be configured in a RAID 5 container for increased performance and availability

    Typically, drive capacity is the priority for Thor nodes


All nodes must run identical operating systems. We also recommend identical BIOS settings and installed packages on every node; this significantly reduces variables when troubleshooting.

Operating System Requirements

Binary installation packages are available for many Linux operating systems. HPCC platform requirements are readily available on the HPCC Systems Portal.
