System Requirements

2.1 Hardware

You will need:

  • At least one certified server for the storage node (appliance). Installing at least three is recommended if your applications and data require resiliency.
  • Three physical servers or virtual machines that can maintain High Availability for the KumoScale Management cluster.
  • An Ansible server for installing the private management cluster. In a bare-metal deployment, the Ansible server is also used for configuration and provisioning.

The list of certified servers, NICs, and SSDs, together with the minimal requirements for all servers, is specified in the KumoScale HCL document (1.3.1).

2.2 Networking Requirements

2.2.1 RDMA-Enabled Networks

KumoScale supports RDMA as the transport protocol for NVMe-oF. When using RDMA as the transport layer, an RDMA-enabled Ethernet network is required.

The NVMe-oF standard requires a reliable (‘lossless’) network for transporting the encapsulated NVMe commands between the host (‘initiator’) and the storage (‘target’). This translates to the following requirements at the rack level:

  1. The hosts (application servers acting as NVMe-oF initiators) must be equipped with an RDMA-capable Ethernet NIC (a quick verification sketch follows this list).
  2. The Top-of-Rack (ToR) switch must support certain Ethernet flow control features, which are required for achieving a 'lossless' RDMA connection. The switch settings differ from vendor to vendor; contact your technical support representative for help with network configuration.
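
This is a generic Linux check, not part of the KumoScale procedure; it assumes the iproute2 rdma tool and the libibverbs-utils package are installed on the initiator.

    # List RDMA devices and their link state (requires iproute2 with rdma support).
    rdma link show
    # Print the capabilities of each RDMA device found (from libibverbs-utils).
    ibv_devinfo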

2.2.2 TCP Networks

KumoScale supports TCP as a transport for NVMe-oF, allowing customers to use existing (legacy) TCP networks with KumoScale. This capability is useful for customers who do not have RDMA-enabled NICs in their application servers (i.e., initiator nodes), or who do not want to modify their existing data center network to support RoCE.

2.2.3 LACP Support

KumoScale supports Link Aggregation Control Protocol (LACP) bonding of the data ports. See LACP Configuration.

The network requirements for LACP are summarized in the following table:

Component / Configuration     Requirement
---------------------------   ----------------------------------------------------------------
TOR Switch                    Should support bonding via LACP for both TCP and NVMe-oF RoCEv2.
Single TOR switch             All bonded ports must be connected to the same switch.
Two TOR switches              The switches must be connected with MLAG or an equivalent
                              protocol that supports bonding.
Port configuration            Bonding cannot be configured during an active session; it must
                              be configured beforehand.
Supported NICs                RDMA: any NIC appearing in the KumoScale HCL.
                              TCP: any NIC with bandwidth ≥ 10 GbE can be used, provided its
                              drivers are supported by the RHEL/CentOS 8.1 kernel.
Data Ports in an Appliance    Either all or none of the ports of an appliance can be bonded.

Note:

  • Bonding must be done over identical NICs (e.g., the same maximum speed, product, and vendor). When bonding RDMA ports, they must belong to the same NIC.
  • To achieve optimal network resiliency:
    • In an NVMe-oF TCP environment, it is recommended to bond all of the appliance’s Data Ports.
    • In an NVMe-oF RoCEv2 environment, it is recommended to bond each RNIC’s Data Ports.
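
The KumoScale procedure itself is covered in LACP Configuration; purely for orientation, the sketch below shows what an 802.3ad (LACP) bond looks like on a generic Linux server using nmcli. The interface names eth0/eth1 and the bond name bond0 are placeholder assumptions.

    # Create an LACP (802.3ad) bond from two identical ports (illustrative only).
    nmcli connection add type bond ifname bond0 bond.options "mode=802.3ad,miimon=100"
    nmcli connection add type ethernet ifname eth0 master bond0
    nmcli connection add type ethernet ifname eth1 master bond0
    nmcli connection up bond0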

2.2.4 VLAN Support

KumoScale supports Virtual Local Area Network (VLAN) tagging for separating traffic into multiple virtual networks over the same physical infrastructure.

The following table summarizes the network requirements for VLAN support in KumoScale:

Component / Configuration     Requirement
---------------------------   ----------------------------------------------------------------
TOR Switch                    Should support VLAN.
Supported NICs                RDMA: any NIC appearing in the KumoScale HCL.
                              TCP: any NIC with bandwidth ≥ 10 GbE can be used, provided its
                              drivers are supported by the RHEL/CentOS 8.1 kernel.
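
For orientation only (this is a generic Linux example, not a KumoScale-specific procedure), a VLAN-tagged interface can be created on an initiator with iproute2; the parent interface eth0, VLAN ID 100, and the address below are placeholder assumptions.

    # Create and bring up a VLAN-tagged interface on top of a physical port.
    ip link add link eth0 name eth0.100 type vlan id 100
    ip addr add 192.168.100.5/24 dev eth0.100
    ip link set dev eth0.100 up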

2.2.5 Connectivity

  1. The appliance management port is selected according to the following guidelines: the installation process sorts the appliance’s ports by maximum speed (from lowest to highest), then by bus info and MAC address, and selects the first port in the sorted list (i.e., the slowest). Connect your management port according to these guidelines, or connect it after the installation and configure it then. (A sketch for enumerating the ports is shown after this list.)
  2. When installing the appliance via USB without connecting any ports, the installation will not complete until a port is connected.
  3. The Ansible server must have internet access in order to download Kubespray when installing the Kubernetes management cluster servers.
  4. The host must have access to both the Management and the KumoScale Data networks.
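
To predict which port the installer will pick, you can enumerate each physical NIC’s link speed, bus address, and MAC, mirroring the sort criteria above. This bash sketch assumes ethtool is installed; note that /sys reports the current link speed, which approximates the maximum speed only when the link is up at full rate.

    # List physical NICs with speed, bus info, and MAC address (illustrative).
    for nic in /sys/class/net/*; do
        [ -e "$nic/device" ] || continue              # skip virtual interfaces
        name=$(basename "$nic")
        speed=$(cat "$nic/speed" 2>/dev/null)         # current link speed in Mb/s
        bus=$(ethtool -i "$name" | awk '/bus-info/ {print $2}')
        mac=$(cat "$nic/address")
        echo "$name speed=${speed:-unknown} bus=$bus mac=$mac"
    done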

2.3 Operating Systems, Software and Prerequisites

2.3.1 KumoScale Appliance

  • KumoScale golden image installation is available only for platforms configured for UEFI.
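
A common way to confirm that a server booted in UEFI mode (a generic Linux check, not a KumoScale command):

    # If /sys/firmware/efi exists, the running system booted via UEFI.
    [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "Legacy BIOS boot"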

2.3.2 KumoScale Private Management Cluster

  • Each of the intended management servers should run CentOS 8.1.
  • NVMe CLI should be installed.
  • The firewall should be disabled.
  • The proxy should be disabled.
  • The Ansible server for installing the KumoScale private management cluster should run CentOS 8.1.
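
For reference, these prerequisites might be satisfied on each CentOS 8.1 server with commands along the following lines; this is an illustrative sketch to be adapted to your site’s policies, not a mandated procedure.

    # Illustrative preparation of a CentOS 8.1 management server.
    sudo dnf install -y nvme-cli              # install NVMe CLI
    sudo systemctl disable --now firewalld    # disable the firewall
    unset http_proxy https_proxy              # clear proxy settings in this session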

2.3.3 Kubernetes Deployment

  • Kubernetes version 1.14 or later. If snapshot support is required, version 1.16 or later (the Kubernetes snapshot feature is alpha in 1.16).
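
On an existing cluster, the version requirement can be checked quickly (illustrative):

    # Report the client and server Kubernetes versions.
    kubectl version --short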

2.3.4 Private Container Registry

  • Access to a container registry that can host the KumoScale container images.
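
As one illustration of populating such a registry (the image names and registry address below are placeholders, not documented values), mirroring an image typically looks like:

    # Mirror a container image into a private registry (all names are placeholders).
    docker pull <source-registry>/kumoscale/<image>:<tag>
    docker tag <source-registry>/kumoscale/<image>:<tag> registry.example.local:5000/kumoscale/<image>:<tag>
    docker push registry.example.local:5000/kumoscale/<image>:<tag>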

2.3.5 OpenStack

  • Supported OpenStack version: Stein.

2.3.6 Ansible

  • The Ansible client supports the following platforms:
    1. CentOS 7.x / 8.x.
    2. Ubuntu 18.x (multipath for NVMe must be disabled).
  • The host and client must have Python 2.7 or above. Verify by running:

        python --version

  • Ansible 2.7 or later must be installed on a machine connected to the Management port.
  • The host should have the mdadm and nvme-cli packages installed.
  • The host must support the NVMe-oF TCP or RDMA transport, or both (see the sketch after this list).
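
The following sketch combines these checks on a CentOS host; the modprobe lines simply confirm that the kernel provides the respective NVMe-oF transport modules.

    # Verify Ansible and host prerequisites (illustrative).
    python --version                  # Python 2.7 or above
    ansible --version                 # Ansible 2.7 or later
    rpm -q mdadm nvme-cli             # required host packages
    sudo modprobe nvme-tcp && echo "NVMe-oF TCP transport available"
    sudo modprobe nvme-rdma && echo "NVMe-oF RDMA transport available"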

2.3.7 NVMe-oF Initiator(s)

Operating System

NVMe-oF RoCEv2 is supported by:

  • Linux® OS with kernel 4.9.64 x86 or newer.
  • CentOS 7.5 or newer.
  • Ubuntu 18.04.

NVMe-oF TCP:

  • The kernel must support NVMe over TCP.

It is recommended to consult with KIOXIA Technical Support regarding the exact distribution you plan to use.

NVMe CLI

NVMe CLI 1.6 or newer.
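
Once an initiator meets these requirements, attaching a KumoScale volume follows the standard NVMe-oF flow supported by nvme-cli; the sketch below uses the TCP transport, and the address, port, and subsystem NQN are placeholders.

    # Discover and connect to an NVMe-oF target over TCP (values are placeholders).
    sudo modprobe nvme-tcp                                # or nvme-rdma for RoCEv2
    sudo nvme discover -t tcp -a 192.168.10.20 -s 4420
    sudo nvme connect -t tcp -a 192.168.10.20 -s 4420 -n <subsystem-nqn>
    sudo nvme list                                        # new namespace appears as /dev/nvmeXnY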