OpenStack Integration Installation & Configuration

This chapter describes the installations and configurations required to bring up the KumoScale software. It assumes that OpenStack is already installed according to the settings above.

Environment Requirements

  1. Verify that your environment includes the following:
    1. A controller node.
    2. A compute node.
    3. A network node (optional).
    4. All nodes run CentOS™ 8.4 or Ubuntu™ 20.04.
  2. Verify that you have the following packages installed (a verification sketch appears after this list):
    1. Controller node:
      1. python-pip.
      2. prov_rest_client.
    2. Compute node:
      1. nvme-cli.
      2. mdadm.
      3. dmidecode.
      4. util-linux.
  3. Verify that OpenStack platform version Xena is installed.
  4. Verify that KumoScale software is installed, the KumoScale Management Cluster is configured, and KumoScale storage nodes have been added to the KumoScale Provisioner service.
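
As a quick sanity check, the package lists above can be verified from the shell. The commands below are a sketch; exact package names may differ slightly by distribution:

  # Controller node - confirm the Python client is present
  pip3 show prov_rest_client
  # Compute node, CentOS:
  rpm -q nvme-cli mdadm dmidecode util-linux
  # Compute node, Ubuntu:
  dpkg -l nvme-cli mdadm dmidecode util-linux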

Installing KumoScale Agent

Execute the following steps to install the KumoScale healing agent:

  1. Download the KIOXIA NVMe-oF Agent:
    The agent suite for OpenStack contains installation scripts for the NVMe-oF Agent. The archive file install_kioxia_nvmeof_agent_.tar contains the installation for the Cinder host (initiator).
  2. After extracting the relevant tar files as described above, complete the following (a sample session appears after this list):
    1. Nova Host Install
      1. Install nvmeof-agent (see its related readme file). Verify that nvme-cli is installed.
      2. Run the extracted script:
        #> ./install_kioxia_nvmeof_agent_.sh
      3. You will be asked to provide the value of OS_BRICK_PATH, the directory where the os-brick package is installed:
        1. For Ubuntu 20.04:
          /usr/lib/python3/dist-packages
        2. For CentOS:
          /usr/lib/python3.6/site-packages
    2. Nova Host Uninstall
      1. For RPM-based installations:
        1. sudo rpm -ev nvmeof-agent-${RELEASE}-1.x86_64
      2. For Debian-based installations:
        1. sudo dpkg -r nvmeof-agent
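
Below is a sketch of a typical install session on an Ubuntu 20.04 Nova host. The archive and script names are abbreviated here exactly as in the text above; substitute the full versioned names from your download, and note that the prompt wording is illustrative:

  # Extract the agent archive and run the installer
  #> tar -xf install_kioxia_nvmeof_agent_.tar
  #> ./install_kioxia_nvmeof_agent_.sh
  # When prompted for OS_BRICK_PATH, supply the os-brick directory, e.g.:
  # /usr/lib/python3/dist-packages

  # To uninstall later on a Debian-based host:
  #> sudo dpkg -r nvmeof-agent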

Configuring KumoScale Software in an OpenStack Environment

Configuring the KumoScale Provisioner

Follow the steps below to configure the KumoScale Provisioner using the file provisioner.conf.

  1. Provide values for prov_ip, token, and cert_file. Enclose each value in single quotation marks (').
  2. For OpenIDC authentication mode, provide values for client_id, client_secret, and token_url. In any other case, specify None. These values are case sensitive, so use None rather than none or NONE.

Details on these parameters are found in KumoScale Configuration Options.

Below is an example of the contents of a valid provisioner.conf file with LOCAL authorization mode:

prov_ip='###.##.#.#'

prov_port=30100    

token='********************************************************************************************************************************************'   

cert_file='/etc/kioxia/ssdtoolbox.pem'

client_id=None

client_secret=None    

token_url=None
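
Once provisioner.conf is populated, it is worth confirming that the Provisioner endpoint is reachable from the controller node before continuing. A minimal sketch, reusing the placeholder address and certificate from the example above:

  # Expect an HTTP status code rather than a connection error
  curl --cacert /etc/kioxia/ssdtoolbox.pem -o /dev/null -w '%{http_code}\n' https://###.##.#.#:30100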

Configuring Cinder for use with KumoScale

This section explains how to configure Cinder for use with the KIOXIA KumoScale storage node. The following operations are supported:

  • Create, list, delete, attach, and detach volumes
  • Create, list, and delete volume snapshots
  • Create a volume from a snapshot
  • Copy an image to a volume
  • Copy a volume to an image
  • Clone a volume
  • Extend a volume

KumoScale Configuration Options lists the parameters that may be used to complete the above actions.

KumoScale Configuration Options

The following table contains the configuration options supported by the KIOXIA KumoScale NVMe-oF driver.

Table 1. Description of KIOXIA KumoScale configuration options

Each entry shows the configuration option and its default value, followed by a description.

kioxia_block_size = 4096
  (Integer) Volume block size in bytes: 512 or 4096 (default).

kioxia_cafile = None
  (String) Certificate for the Provisioner REST API SSL.

kioxia_client_id
  Client ID of a client that has a service account role of ADMIN. Relevant only for OpenIDC authentication mode.

kioxia_client_secret
  The client secret. Relevant only for OpenIDC authentication mode.

kioxia_desired_bw_per_gb = 0
  (Integer) Desired bandwidth in B/s per GB.

kioxia_desired_iops_per_gb = 0
  (Integer) Desired IOPS per GB.

kioxia_max_bw_per_gb = 0
  (Integer) Upper limit for bandwidth in B/s per GB.

kioxia_max_iops_per_gb = 0
  (Integer) Upper limit for IOPS per GB.

kioxia_max_replica_down_time = 0
  (Integer) Maximum downtime, in minutes, for a replica of a replicated volume. The default of 0 means forever; otherwise the value must be between 5 and 1440 (24 hours).

kioxia_num_replicas = 1
  (Integer) Number of volume replicas.

kioxia_provisioning_type = THICK
  (String(choices=['THICK', 'THIN'])) Specify whether to use a thin or thick volume.

kioxia_same_rack_allowed = False
  (Boolean) Whether more than one replica may be allocated on the same rack.

kioxia_snap_reserved_space_percentage = 0
  (Integer) Percentage of the parent volume to be used for the log.

kioxia_snap_vol_reserved_space_percentage = 0
  (Integer) Writable snapshot percentage of the parent volume used for the log.

kioxia_snap_vol_span_allowed = True
  (Boolean) Allow span in snapshot volume (default True).

kioxia_span_allowed = True
  (Boolean) Allow span (default True).

kioxia_token = None
  (String) KumoScale Provisioner authorization token.

kioxia_url = None
  (String) KumoScale Provisioner full URL. A valid URL must be provided.

kioxia_vol_reserved_space_percentage = 0
  (Integer) Thin volume reserved capacity[1] allocation percentage.

kioxia_writable = False
  (Boolean) Specify whether snapshot volumes from this class are writable.

volume_backend_name
  The name by which Cinder identifies the KumoScale backend (storage node).

volume_driver
  Full path and name of the KumoScale volume driver.
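
For example, several of these options can be combined in a single cinder.conf backend section to request thin, replicated volumes with an IOPS ceiling. The section below is an illustrative sketch only; the URL, token, and QoS values are placeholders:

  [kumoscale-1]
  volume_backend_name=kumoscale-1
  volume_driver=cinder.volume.drivers.kioxia.kumoscale.KumoScaleBaseVolumeDriver
  kioxia_url=https://##.##.##.##:30100
  kioxia_cafile=/etc/kioxia/ssdtoolbox.pem
  kioxia_token=############################
  kioxia_provisioning_type=THIN
  kioxia_num_replicas=2
  kioxia_max_iops_per_gb=100
  kioxia_same_rack_allowed=False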

 

To configure KumoScale software in an OpenStack environment:

  1. Edit the file /etc/cinder/cinder.conf. Under the [DEFAULT] section, set the value of the enabled_backends parameter. For example:
    1. [DEFAULT]

      enabled_backends = kumoscale-1
  2. Add a section for the backend (storage node) group specified in the enabled_backends parameter.
  3. In the newly created backend (storage node) group section, set the following configuration options:
    1. [kumoscale-1]
      # Backend name
      volume_backend_name=kumoscale-1
      # The driver path
      volume_driver=cinder.volume.drivers.kioxia.kumoscale.KumoScaleBaseVolumeDriver
      # Kumoscale provisioner URL
      kioxia_url=https://##.##.##.##:30100
      # Kumoscale provisioner cert file
      kioxia_cafile=/etc/kioxia/ssdtoolbox.pem
      # Kumoscale provisioner token
      kioxia_token=############################
  4. Restart the Cinder and Nova services, and verify that the ssdtoolbox.pem file exists in /etc/kioxia/.
  5. Run the following in the controller system to create a new type:
    1. openstack volume type create kumoscale-1
  6. Run the following in the controller system:
    1. openstack volume type set --property volume_backend_name=kumoscale-1 kumoscale-1
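
To verify the configuration, create a small test volume with the new type and confirm that it reaches the available state. The volume name and size below are arbitrary:

  # Create a 1 GB volume of the kumoscale-1 type
  openstack volume create --type kumoscale-1 --size 1 test-vol
  # Confirm the volume becomes 'available'
  openstack volume show test-vol -c status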

Live Migration Support

Note - It is advised to use physical servers as compute nodes: if one or more of the compute nodes are VMs, the outcome of a live migration is unpredictable and may crash the cluster.

The following instructions are for setups with QEMU™ or KVM hypervisors. Different setups may require different configurations according to the hypervisor.

  1. Configure live migration according to the OpenStack platform user guide available at
    https://docs.openstack.org/nova/xena/admin/configuring-migrations.html#section-configuring-compute-migrations
  2. If the initiator is a VMware ESX® hypervisor, enable "Expose hardware assisted virtualization to the guest OS" in the CPU settings.
  3. Verify that the instance you intend to migrate is backed only by volumes (including the OS disk).
  4. Edit the configuration file /etc/nova/nova.conf. Add the following entry to the [DEFAULT] section:
    1. live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
  5. If the target host and source have different CPU features, set the following in [libvirt] section:
    1. cpu_mode=custom 
      cpu_model=[baseline-model]
  6. Compute the baseline-model according to the instructions on the following page (a sketch of the procedure appears after this list):
    https://docs.fedoraproject.org/en-US/Fedora/18/html/Virtualization_Administration_Guide/ch15s13s03.html
  7. Restart Nova.
  8. For Ubuntu, add the following line to /etc/neutron/plugins/ml2/linuxbridge_agent.ini on both servers:
    1. [linux_bridge] 
      physical_interface_mappings = physnet1:ens801f0in
  9. Configure libvirtd on both servers (a verification sketch appears after this list).
    The following example is a simple configuration for libvirtd↔libvirtd communication over TCP with no authentication. On both servers:
    1. Remove the remark symbol from the following line in /etc/sysconfig/libvirtd:
      1. LIBVIRTD_ARGS="--listen"
    2. For Ubuntu, add the following line to /etc/default/libvirtd:
      1. libvirtd_opts="-l"
    3. Remove the remark symbol from the following lines in /etc/libvirt/libvirtd.conf:
      1. listen_tls = 0 
        listen_tcp = 1
    4. Remove the remark symbol from the following line in /etc/libvirt/libvirtd.conf and set its value:
      1. auth_tcp = "none"
  10. Change nova.conf as follows.
    1. Add:
      1. live_migration_uri=qemu+ssh://root@%s/system
    2. Use virsh to check libvirtd connectivity from both systems, for example:
      1. virsh -c qemu+ssh://root@blade-1/system
    3. Add the following:
      1. [service_user] 
        send_service_user_token = True
        auth_type = password
        project_domain_name = Default
        project_name = service
        user_domain_name = Default
        password = servicepassword
        username = nova
        auth_url = http://172.28.30.23:5000
  11. Restart nova-compute:
    1. systemctl restart nova-*
  12. Restart libvirtd.
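
As a sketch of step 6, the baseline CPU model can be computed with virsh by collecting each host's <cpu> description and letting libvirt calculate the common subset. The file names here are arbitrary:

  # On each compute node, extract the host CPU description
  virsh capabilities | awk '/<cpu>/,/<\/cpu>/' > cpu-$(hostname).xml
  # Gather the files on one host, concatenate, and compute the baseline
  cat cpu-*.xml > all-cpus.xml
  virsh cpu-baseline all-cpus.xml
  # Use the <model> element of the output as [baseline-model] in nova.conf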
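
After the libvirtd changes in step 9, you can confirm that the daemon is listening on the libvirt TCP port (16509 by default, assuming the port was not changed in libvirtd.conf). On distributions where libvirtd is socket-activated by systemd, enabling libvirtd-tcp.socket may be required instead of the --listen argument:

  systemctl restart libvirtd
  ss -tln | grep 16509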
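
Once both hosts are configured and the services have been restarted, live migration can be exercised from the controller. The instance and host names below are placeholders, and exact flag names vary between OpenStack client releases:

  # Migrate a running, volume-backed instance to another compute node
  openstack server migrate --live-migration --host <target-host> <instance>
  # Watch status and current host during the migration
  openstack server show <instance> -c status -c OS-EXT-SRV-ATTR:host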

[1] Definition of capacity - KIOXIA Corporation defines a megabyte (MB) as 1,000,000 bytes, a gigabyte (GB) as 1,000,000,000 bytes and a terabyte (TB) as 1,000,000,000,000 bytes. A computer operating system, however, reports storage capacity using powers of 2 for the definition of 1 Gbit = 2^30 bits = 1,073,741,824 bits, 1 GB = 2^30 bytes = 1,073,741,824 bytes and 1 TB = 2^40 bytes = 1,099,511,627,776 bytes and therefore shows less storage capacity. Available storage capacity (including examples of various media files) will vary based on file size, formatting, settings, software and operating system, and/or pre-installed software applications, or media content. Actual formatted capacity may vary.

 

Next: OpenStack Storage Provisioning