Install - Configuration & Deploying Additional Nodes

This section explains how to configure the master and worker nodes of the KumoScale Storage Cluster.

Access the KumoScale Storage Cluster from a Remote Administrative Host

Refer to section Administrative Host with Support for Kubernetes kubectl to review the requirements for a remote administrative host. All post-install actions must be done by logging into the storage cluster from a remote host that supports Kubernetes kubectl. If you are:

  • Logging in to the master node of the KumoScale storage cluster for the first time after installing KumoScale software, you need to first set up the KumoScale Administrator account, admin_cli, as explained in Set up the Administrator Account on the Storage Cluster.
  • Logging into the storage cluster from a remote host for the first time, you will need to add it to the cluster configuration as explained in Set up your Remote Host to Access the KumoScale Storage Cluster.
  • Logging into the KumoScale storage cluster from a remote host that is already configured for access, you need to follow the steps in Log into the KumoScale Storage Cluster from a Known Remote Host.

All of the above actions apply to the first server to be configured, the master of the KumoScale storage cluster. All subsequent access requires only the steps in the third bullet above.

Set up the Administrator Account on the Storage Cluster

When you install KumoScale software on the first node, the master of the storage cluster, you will need to set up the KumoScale Administrator account, admin_cli. Follow these steps:

  1. From your remote host, enter:
     ssh admin_cli@<VIP for the Master>
  2. When asked for a password, enter:
     admin
     This is the default password, so you will need to reset it in the next step.
  3. You will be prompted for a new password for admin_cli. Set the password according to operating system requirements.
  4. Confirm the new password.
  5. You will get a message about re-opening the session.
  6. Log back in using the new password.
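The first-login exchange looks roughly like the following (prompts are paraphrased; the exact wording comes from the node's operating system and may differ):

$ ssh admin_cli@<VIP for the Master>
admin_cli@<VIP for the Master>'s password:    (enter the default: admin)
You are required to change your password immediately.
New password:                                 (enter your new password)
Retype new password:                          (confirm the new password)
(the session is closed; log back in with the new password)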

You are now ready to set up your remote host to access the KumoScale storage cluster as explained in Set up your Remote Host to Access the KumoScale Storage Cluster.

Set up your Remote Host to Access the KumoScale Storage Cluster

Perform the following steps to set up your remote host to access the cluster:

  1. On your remote host, create a folder called .kube to hold configuration information. For example:
     mkdir -p ~/.kube
  2. Log into the master of the KumoScale storage cluster as admin_cli with the following:
     ssh admin_cli@<VIP for the Master>
  3. You will now be at the CLI> prompt.
  4. To proceed with configuration and other activities, generate a token by entering:
     generate-token --name admin --password admin

  5. A token string will be returned, allowing you access to KumoScale CLI commands for one (1) hour. You can change the expiration time by using the optional parameter expiration and providing a time in hours. More information on using tokens for authentication is available in the KumoScale User Manual.
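     For example, to request a token that remains valid for 24 hours (assuming the optional parameter is passed as --expiration; see the KumoScale User Manual for the exact syntax):

     CLI> generate-token --name admin --password admin --expiration 24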


  6. You may want to collect information on the server for use when configuring nodes. show-mgmt-ips will return the IP address and name. For example:

     CLI> show-mgmt-ips
     Interface ID: 331 Name: kx0

     You can use the value of name, kx0, returned above, to get details. For example:

     CLI> show-mgmt-ips --name kx0
     Interface ID:    331
     Interface Name:  kx0
     Mode:            DHCP
     IP address ID:   332
     IP Address:      192.0.2.0
     Subnet:          255.255.0.0
  7. You need to enable the remote host to configure the Kubernetes cluster and KumoScale storage nodes. This is done with the get-kubeconfig command, which is documented in the KumoScale CLI Guide: display the kubeconfig information, then copy the results into a file.

     Enter the command below to display the information:

     CLI> get-kubeconfig --show-to-screen

     Create a new file called config in the .kube folder created in step 1 and copy the results from the above into the file.

     Verify that you can see information on the KumoScale storage cluster by entering:
     kubectl cluster-info
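     For example, on a Linux remote host the sequence might look like the following (a sketch; it assumes the .kube folder from step 1 and that you paste the output of get-kubeconfig --show-to-screen into the file):

     cat > ~/.kube/config     # paste the kubeconfig output, then press Ctrl-D
     chmod 600 ~/.kube/config # recommended: the file contains cluster credentials
     kubectl cluster-info     # should now report the KumoScale storage cluster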

Going forward you can log into the storage cluster using only the steps of Log into the KumoScale Storage Cluster from a Known Remote Host.

Log into the KumoScale Storage Cluster from a Known Remote Host

To log into the KumoScale storage cluster from a remote host that is already known to it, provide the password and token information when prompted as shown in the steps below:

  1. Log into the master of the KumoScale storage cluster as admin_cli with:
     ssh admin_cli@<VIP for the Master>
  2. You will now be at the CLI> prompt.
  3. At the CLI> prompt, enter:
     generate-token --name admin --password admin
  4. The returned token allows access to KumoScale CLI commands for one (1) hour. You can change the expiration time by using the optional parameter expiration and providing a time in hours. More information on using tokens for authentication is available in the KumoScale User Manual.

Configure the First Master

You need to set up one server in the KumoScale storage cluster with licensing and configuration details that will be used to deploy the other nodes. This is referred to as the first master. Once completed, this step is not needed for any other masters added to the KumoScale storage cluster, nor for any other storage nodes.

From the remote host, complete the steps below. The sample custom resource files referenced in this section are included with the KumoScale software and are located in the directory operators/ks-config-operator/samples.

1. Set the secret for KumoScale software.
kumoscale-secret.yaml should contain the desired admin password base64 encoded. Password requirements are defined according to the current Linux OS password policy.
a. To encode the admin password, run the command:
echo -n 'YourPassword' | base64
b. Edit kumoscale-secret.yaml and copy the password returned above into the password field as shown below:
apiVersion: v1
kind: Secret
metadata:
   name: kumoscale-secret
   namespace: default
type: Opaque
data:
   password: <password-returned-from-step-1a>
c. Set the secret with:
kubectl create -f kumoscale-secret.yaml

d. The system will return confirmation that the secret was created.
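You can also confirm that the secret exists with the standard kubectl get verb; this lists the object without revealing the encoded password:

kubectl get secret kumoscale-secret -n default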


2. Specify the license key provided to you by KIOXIA.

a. Edit the license Custom Resource, kumoscale.kioxia.com_v1_license_cr.yaml, and replace the value of license with the license key provided by KIOXIA.
b. Save the file then run this command to install the license:
kubectl create -f kumoscale.kioxia.com_v1_license_cr.yaml

[Screenshot installman-fig5: confirmation that the KumoScale Provisioner service license was created]

c. This will return a message about the KumoScale Provisioner service license being created similar to the screenshot shown above. You can validate that the KumoScale software license was installed with the following command:
kubectl describe licenses provisioner-license
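Because licenses is a custom resource, the generic kubectl get verb also works if you only want to list the installed license objects:

kubectl get licenses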

3. Edit the Master CR, kumoscale_v1_master_cr.yaml, and set the value of:

a. numberOfMasters to the number of masters you want in the KumoScale storage cluster. See Number of Masters for cluster requirements. Note that you will not be able to configure storage nodes until the same number of servers have been deployed as masters on the KumoScale storage cluster.
b. affinity or anti-affinity to specify any constraints on when a node can be added to the storage cluster, as explained in Custom Requirements for the Masters. You will need to select whether a node may join only when it satisfies affinity constraints, anti-affinity constraints, or neither; the default is neither. The settings are described as follows:
  • affinity: the node meets a certain set of requirements, for example it is located in a specific rack or it has a particular name.
  • anti-affinity: the node does not meet a certain set of requirements, for example it is NOT located in a specific rack.
Both of these entries are commented out in the sample file, which means they do not apply. If you want to specify custom constraints, you will need to uncomment these values.

For example, the Master CR below specifies that the storage cluster has the following requirements:
  • It has three (3) masters.
  • To join the cluster as a master, the node must have the name: node1, node3, or node9.
  • To join the cluster as a master, the node must not be in the same rack or region as existing masters in the KumoScale storage cluster.
apiVersion: kumoscale.kioxia.com/v1
kind: Master
metadata:
  name: master
spec:
  numberOfMasters: 3
  affinity:
    matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
          - node1
          - node3
          - node9
  antiAffinity:
    - topologyKey: "topology.kubernetes.io/rack"
    - topologyKey: "topology.kubernetes.io/region"


4. Configure the first master node for the cluster with:
kubectl create -f kumoscale_v1_master_cr.yaml


You will receive a message that the first master node was configured.

5. You now need to add this as a storage node on the KumoScale storage cluster. Create a new CR to define the node: edit kumoscale_v1_storagenode_cr.yaml and save it with a new name, such as kumoscale_v1_storagenode1_cr.yaml.

6. Your new CR should have values specific to the server. A complete list of possible parameters is available in the KumoScale User Manual; you should review all the parameters before creating the node. The following parameters are required:
  • name: Host name.
  • initMgmtIp: IP address of the server. If you specified a static IP during installation, you must provide the same address here. Otherwise, provide the KumoScale software IP returned using DHCP.
  • adminSecretName: The name of the secret created in step 1 above.
  • timeSettings: Time settings for the node, comprising timeZoneID (time zone ID), mode (NTP), and ntpServer (NTP server FQDN or IP address).
  • topology: This information will be verified against any affinity or anti-affinity information in the Master CR. The server will only join as a master if it meets the requirements.
  • portals: The network ports being used for storage data.
  • transportType: TCP_IP or RoCEv2.

An example CR for a server deployed as a master to the storage cluster is shown below. Note that we do not show the IP addresses and subnet for security reasons. You will need to specify addresses.

apiVersion: kumoscale.kioxia.com/v1
kind: StorageNode
metadata:
  name: ks-node1-000c298c715f
spec:
  initMgmtIp: ###.##.##.###
  adminSecretName: kumoscale-secret
  groupName: group1
  timeSettings:
    timeZoneID: Asia/Jerusalem
    mode: NTP
    ntpServer: ###.###.###.###
  network:
    portals:
      - ip: ###.###.###.###
        name: portal1
        subnet: ###.###.#.#
        interface: kx0
        port: 4420
        transportType: TCP_IP
  topology:
    - name: topology.kubernetes.io/rack
      value: "RACK1"
    - name: topology.kubernetes.io/zone
      value: "LAB"

Once the above configuration file has been prepared, run the following command:

kubectl create -f kumoscale_v1_storagenode1_cr.yaml
7. You should receive confirmation that the master node was created. In addition, you can verify that the node was created as a master, with the name provided, using any of the commands below.
kubectl get nodes -A -o wide
kubectl get storagenodes -A -o wide
kubectl cluster-info


For detailed information on the node enter:

kubectl describe storagenodes

Install and Deploy Remaining Masters

NOTE: In order to proceed with adding nodes, you must confirm that:

  • The storage node is uninitialized (not already configured) with all available SSDs attached.
  • You have both a KumoScale secret and a valid license file.
As noted earlier, you cannot add nodes as storage nodes until the number of master servers in the KumoScale storage cluster is equal to the value of numberOfMasters specified in the Master CR. Servers will stay in a Not Ready state until all required masters are created.

For example, the screenshot below shows the results of kubectl get nodes -A -o wide when only two (2) masters have been configured for a cluster of three (3) (that is, numberOfMasters is 3), and when all three (3) masters have been configured.

[Screenshot two-storage-nodes: node status with two of three masters configured, then with all three masters configured]
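While waiting for the remaining masters, you can watch node states update live by adding the standard kubectl watch flag to the same command:

kubectl get nodes -A -o wide -w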

To install and configure additional masters on the KumoScale storage cluster, follow the steps below for each node to be designated:

1. Complete the steps in Chapter 3 to install the software on each server to be deployed on the KumoScale storage cluster.
2. Edit the storage node CR created in Configure the First Master, steps 5 and 6 (for example, kumoscale_v1_storagenode1_cr.yaml), with values specific to the server (e.g., name, IP).
Some parameters, such as adminSecretName, will not change. Topology information will be verified against any affinity or anti-affinity information in the Master CR. The server will only join the cluster as a master if it meets the requirements.
3. Deploy the node on the cluster using your storage node CR.
For example:
kubectl create -f kumoscale_v1_storagenode1_cr.yaml
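If you have prepared one CR file per server, you can deploy them in a single pass; the filenames below are illustrative:

for f in kumoscale_v1_storagenode2_cr.yaml kumoscale_v1_storagenode3_cr.yaml; do
  kubectl create -f "$f"
done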


Verify KumoScale Storage Cluster Configuration

At any point in the process you can submit any of these commands to get status of services, nodes, or pods.

kubectl get nodes -A -o wide will show several pieces of information including what is running on all nodes. In the image below, KumoScale software has been installed on one node.

[Screenshot get-nodes: kubectl get nodes -A -o wide output with KumoScale software installed on one node]

kubectl get svc -A -o wide will show which services are running. You should see at least the primary storage cluster, the Provisioner Service, the CSI driver, and control operators (ks-install-operator and ks-config-operator) as shown below.

[Screenshot get-svc: kubectl get svc -A -o wide output showing the KumoScale services]

kubectl get pods -A -o wide will similarly show the pods set up for KumoScale software services and includes the KumoScale Provisioner service, control operators, and the CSI driver. You should also see any other cluster pods you may have set up.

NOTE: When adding the second storage node as a master, if you issue the kubectl describe command, the following message may appear: "Cannot add a master while there are not enough masters pending". This is not an error; it only reflects that, because the number of master nodes needs to be an odd number, the storage node stays in a pending state until a third storage node is added as a master. You will see this again with any even-numbered master node.

Verifying Individual Services on the KumoScale Storage Cluster

You can see which pods are running with the command:

kubectl get pods -n kumo-services

It can be useful to individually verify that all storage cluster services (Provisioner Service, control operators, the CSI driver) are running.

Provisioner Service: To verify that the Provisioner Service is running and read its details, issue either of the following commands:

kubectl get provisionerservices

kubectl get services -A | grep provisioner


CSI driver for Kubernetes orchestration: To verify that the CSI service is running, issue the following command:

kubectl get csiservices


Control Operators: To verify both ks-install-operator and ks-config-operator are running, use either:

kubectl get services -A | grep ks-install-operator

or:

kubectl get services -A | grep ks-config-operator

Install Internal Components on Storage Cluster

The following components are used for KumoScale internal analytics and should be installed as part of the storage cluster once all management servers are running. In some cases, you will need to edit the CR file before installing the application. In particular:

  • Replace the value of $VIP (in externalIPs: ["$VIP"]) with the storage cluster VIP.
  • Set the value of replicaCount or replicas to the number of servers in the storage cluster. This ensures that the services are replicated on all the servers in the storage cluster to provide fail-over[1].
For example, for a single-node cluster with one master, use replicas = 1 to get all services available. For a deployment with a 3-server cluster, use replicas = 3; the applications will scale as you add nodes.


Prometheus™ Time Series Database (TSDB): Install the Prometheus monitoring and alerting application by first editing the CR file. This service includes an example Grafana™ dashboard for visualizing the Prometheus data. To configure the application and dashboard, see the Table below, Prometheus/Grafana Installation Parameters.

NOTE: The initial Grafana credentials are: admin/ksAdmin

Table. Prometheus/Grafana Installation Parameters

Each entry lists the Prometheus stack parameter, whether it is optional or required, and its description.

retention (Optional): The amount of time to retain metrics. Possible values are [0-9]+(ms|s|m|h|d|w|y). Default value is 1y.
replicas (Optional): The number of replicas for data collection. For single-node clusters, replicas should be 1; for other installations, replicas should be 3. Default value is 3.
storageClassName (Optional): The storage class of the volume. Default value is kumoscale-local-storage.
storage (Optional): The size of storage for the Prometheus service. Default value is 40Gi.
alertManager.enabled (Optional): Whether the alertManager is enabled. Possible values are true (enabled) or false (not enabled). Default value is true.
alertManager.retention (Optional): The amount of time to retain data. Possible values are [0-9]+(ms|s|m|h). Default value is 8760h.
alertManager.storageClassName (Optional): The storage class of the volume. Default value is kumoscale-local-storage.
alertManager.storage (Optional): The size of storage for the alertManager service. Default value is 40Gi.
prometheus-node-exporter.enabled (Optional): Whether the Prometheus node exporter is enabled. Possible values are true (enabled) or false (not enabled). Default value is true.
kube-state-metrics (Optional): Whether the Kubernetes kube-state-metrics service is enabled. Possible values are true (enabled) or false (not enabled). Default value is true.
grafana.externalIPs (Required): An IP for the Grafana web interface. A VIP is recommended.
grafana.persistence.enabled (Optional): Enable Grafana persistence for a persistent password and data sources. Default value is true.
grafana.storageClassName (Optional): The storage class of the volume. Default value is kumoscale-local-storage.
grafana.storage (Optional): The volume size of storage for the Grafana service. Default value is 1Gi.
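As a sketch, an edited Prometheus CR using the parameters above might contain entries like the following; the exact layout is defined by the sample file shipped with the software, and the values here are illustrative:

retention: 1y
replicas: 3
storageClassName: kumoscale-local-storage
storage: 40Gi
alertManager:
  enabled: true
  retention: 8760h
grafana:
  externalIPs: ["<storage cluster VIP>"]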

Then issue the following command:

kubectl create -f prometheus.kumoscale.kioxia.com_v1_prometheusservice_cr.yaml


Fluentd Data Collector: Install the Fluentd data collector by first editing the CR file, fluentd.kumoscale.kioxia.com_v1_fluentd_cr.yaml, using the Table below, Fluentd Installation Parameters.


Table. Fluentd Installation Parameters

clusterIP (Required): The value of cluster_vip provided during KumoScale installation.
name (Optional): Port name.
protocol (Optional): Syslog protocol.
containerPort (Optional): Container port.
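A hypothetical excerpt using these parameters might look like the following; the port entry grouping shown here is an assumption, so follow the sample file for the exact structure:

clusterIP: <cluster_vip>
ports:
  - name: syslog
    protocol: UDP
    containerPort: 514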

Then issue the following command:

kubectl create -f fluentd.kumoscale.kioxia.com_v1_fluentd_cr.yaml


Loki Log Aggregation System: Edit the file loki.kumoscale.kioxia.com_v1_loki_cr.yaml using the Table below, Loki Installation Parameters.


Table. Loki Installation Parameters

size (Optional): The size of the volume that saves the logs. Default value is 100Gi.
storageClassName (Optional): The storage class of the volume. Default value is kumoscale-local-storage, which has protocol: Local and provisioningType: "thin".
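For example, a minimal excerpt that sets both parameters to their documented defaults (shown only for illustration):

size: 100Gi
storageClassName: kumoscale-local-storage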

Install the Loki log aggregation system by issuing the following command:

kubectl create -f loki.kumoscale.kioxia.com_v1_loki_cr.yaml


Syslog: Complete details on Syslog are provided in the KumoScale User Manual. In summary, to configure Syslog, edit the CR file as appropriate for your environment. You will need to specify the name and URL. You can also specify other parameters, such as whether Syslog uses TLS/SSL; in that case you will need to provide the certificate file. You will also need to ensure that syslog-secret.yaml contains the Syslog certificate base64 encoded. To do this, create a Syslog cert secret from the certificate file with:

kubectl create secret generic syslog-secret --from-file=cert=<path to cert file>

Set a cert secret for Syslog with:

kubectl create -f syslog-secret.yaml
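If you prefer to build the secret manifest by hand, a syslog-secret.yaml mirroring the kumoscale-secret.yaml pattern shown earlier would look like the following; the cert key matches the --from-file=cert flag above, and this layout is an assumption:

apiVersion: v1
kind: Secret
metadata:
   name: syslog-secret
   namespace: default
type: Opaque
data:
   cert: <base64-encoded-certificate>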

To add a new Syslog:

kubectl create -f config/crd/bases/kumoscale.kioxia.com_syslogs.yaml

Verify Internal Tools Configuration

You can see which tools are running by using kubectl get pods -A -o wide, including the replicas that are created automatically based on the value of replicas specified.

Install and Configure Storage Nodes

NOTE 1.  Storage nodes are configured by creating nodes from customizations you make to the Storage Node CR. The KumoScale User Manual describes the many parameters that can be specified so that you can create your own templates for different types of storage nodes. You should review the possibilities before creating the nodes as indicated in Step (2) below.

NOTE 2.  It is very important that all components in a cluster are synchronized. Some sites may use external worldwide NTP servers while others might use an internal NTP server. We strongly recommend that you validate cluster-wide time synchronization between all components to reduce the risk of issues related to node probe timers.


To install and configure KumoScale software on each server designated for storage, complete the following steps:

  1. Install KumoScale software on each server and prepare for configuration as documented in the Installation Overview.
  2. Create the CR for your storage node using kumoscale_v1_storagenode_cr.yaml as a template. Your new CR (we will refer to it as kumoscale_v1_storagenode2_cr.yaml) should have values specific to the node, such as name, IP, and network information. A description of all possible settings, with examples, is given in the KumoScale User Manual. You may defer creating CRs and deploying storage nodes as workers until you have fully explored all possible settings for the nodes.
  3. Deploy the node using the settings specified in kumoscale_v1_storagenode2_cr.yaml with:
     kubectl create -f kumoscale_v1_storagenode2_cr.yaml

Next Steps

Now that you have installed and configured the masters and other storage nodes, you are ready to explore additional features of KumoScale software documented in the KumoScale User Manual. The manual will explain customizations to your deployment, logging and monitoring tools, and support of the KumoScale software solution.


Depending on your environment, you may also need to reference one or more of these documents, available in the KumoScale software version 3.20 documentation.

  • KumoScale Software Ansible Module User Guide: Installation guide and user manual for the KumoScale software Ansible™ modules and playbooks for bare-metal environments; includes a cross-domain resiliency solution using the KumoScale Provisioner service.
  • KumoScale CLI Guide: User manual and guide for the KumoScale CLI.
  • KumoScale CSI Driver User Guide: Installation guide and user manual for the KumoScale CSI driver used for Kubernetes orchestration.
  • KumoScale Software OpenStack Platform User Guide: User manual and installation guide for KumoScale software components for integration with OpenStack™ environments.
  • KumoScale REST API: KumoScale software REST APIs for Storage Nodes and the Provisioner.


[1] No fail-over is available if there is only one server in the storage cluster.


Next: Install: Troubleshooting