Registering HPE Ezmeral Data Fabric on Kubernetes as Tenant Storage

This procedure describes how to register HPE Ezmeral Data Fabric on Kubernetes as tenant storage.

Prerequisites

NOTE
Read the complete procedure before you start the registration process.
  • The HPE Ezmeral Runtime Enterprise deployment must not already have tenant storage configured. In the HPE Ezmeral Runtime Enterprise (ERE) Web UI, verify that Tenant Storage is set to None on the Settings screen.
  • Make sure that the HPE Ezmeral Data Fabric on Kubernetes cluster does not have pre-existing Data Fabric volumes named in the tenant-<id> format. For more information, see Administering Volumes. You can also run the following command inside the admincli-0 pod:
    maprcli volume list -columns volumename | grep tenant

    If any such volume exists, the Data Fabric cluster is already registered as tenant storage. Contact Hewlett Packard Enterprise Support for technical assistance. (A sketch of running this check with kubectl appears at the end of this list.)

  • An HPE Ezmeral Data Fabric on Kubernetes cluster must already exist. See the HPE Ezmeral Data Fabric documentation for more details on HPE Ezmeral Data Fabric on Kubernetes clusters.
  • Before proceeding to register HPE Ezmeral Data Fabric on Kubernetes, you must have created the Data Fabric cluster by completing the procedure Creating a New Data Fabric Cluster up to Step 5: Summary.
  • This procedure must be performed by the user who installed HPE Ezmeral Runtime Enterprise.
  • This procedure may require 10 minutes or more per EPIC or Kubernetes host (Controller, Shadow Controller, Arbiter, Master, Worker, and so on).
  • This procedure must be performed on the primary Controller host.

    If Platform HA is enabled, check the Controllers page in the ERE Web UI to confirm which Controller is designated as the primary.

    CAUTION
    You will not be able to delete this HPE Ezmeral Data Fabric on Kubernetes cluster after you have completed this step.
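
The following is a minimal sketch of the pre-existing volume check from the second prerequisite, assuming that you reach the admincli-0 pod with kubectl; the namespace <datafabric_namespace> is a placeholder for the namespace in which the HPE Ezmeral Data Fabric on Kubernetes cluster runs:

    # One way to run the check from a host with kubectl access; grep runs locally.
    # Replace <datafabric_namespace> with the namespace of your Data Fabric cluster.
    kubectl exec -n <datafabric_namespace> admincli-0 -- \
      maprcli volume list -columns volumename | grep tenant

    # No output means that no tenant-<id> volumes exist, so this prerequisite is met.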

About this task

HPE Ezmeral Runtime Enterprise can connect to multiple Data Fabric storage deployments; however, only one Data Fabric deployment can be registered as tenant storage.
  • If you have an HPE Ezmeral Data Fabric on Kubernetes cluster outside the HPE Ezmeral Runtime Enterprise, and if you want to configure HPE Ezmeral Data Fabric on Kubernetes as tenant storage, continue with this procedure.
  • If you have already selected another Data Fabric instance for tenant/persistent storage, do not proceed with this procedure. Contact Hewlett Packard Enterprise Support if you want to use a different Data Fabric instance as tenant storage.

This procedure may require 10 minutes or more per EPIC or Kubernetes host (Controller, Shadow Controller, Arbiter, Master, Worker, and so on), as the registration procedure configures and deploys Data Fabric client software on each host.

After Data Fabric registration is completed, the resulting configuration is summarized under the Registration step of the procedure below.

Procedure

  1. Preparation:
    1. Have the Platform Administrator username and password ready.
    2. Verify that all cluster nodes are up and running, and that the system is not in a degraded state.
    3. Obtain the IP address of a cluster master node by executing the following command:
      bdconfig --getk8shosts

      This command returns a table with information for all nodes. You need the IPADDR value of any node in the relevant cluster that shows K8S_MASTER as True. If the cluster has more than one master node, pick the IPADDR of any one of the master nodes to use as the Kubernetes Master Node IP in the next step; you do not need to repeat Step d for each master node IP.
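
      For illustration only, you can narrow the table with a standard grep (a sketch, not a documented bdconfig option) and read the IPADDR from a row where K8S_MASTER is True; confirm the column visually, because other columns may also contain True:

      # Rough filter only; verify that the matching row really has K8S_MASTER set to True.
      bdconfig --getk8shosts | grep -i true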

    4. Gather the information from the HPE Ezmeral Data Fabric on Kubernetes cluster and create a manifest file at /opt/bluedata/tmp/<MASTER_NODE_IP>/dftenant-manifest by executing the following command:
      LOG_FILE_PATH=/tmp/<log_file> MASTER_NODE_IP="<Kubernetes_Master_Node_IP_Address>" /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action prepare_dftenants

      where:

      LOG_FILE_PATH is an optional parameter that can help you confirm or troubleshoot the operation. If it is not provided, the file /tmp/bds<datetime>.log is created.

      MASTER_NODE_IP is the master node IP address obtained in Step c above.

      The manifest that is created contains the following entries:
      CLDB_LIST="<comma-separated;FQDN_or_IP_address_for_each_CLDB_node>"
      CLDB_PORT="<port_number_for_CLDB_service>"
      SECURE="<true_or_false>" (Default is true)
      CLUSTER_NAME="<name_of_DataFabric_cluster>"
      REST_URL="<REST_API_URL_as_hostname:port>"
      EXT_MAPR_MOUNT_DIR="<directory_in_mount_path_for_volumes>" (Default is /exthcp)
      TICKET_FILE_LOCATION="<path_to_ticket_for_HCP_admin>"
      SSL_TRUSTSTORE_LOCATION="<path_to_ssl_truststore>"
      HCP_ADMIN_USER="<name_of_HCP_admin_user>" (Default is mapr)
      EXT_SECRETS_FILE_LOCATION="<path_to_external_secrets_file_for_Spark_cluster>"
      FORCE_ERASE="<true_or_false>" (Default is true)
      RESTART_CNODE="<true_or_false>" (Default is true)
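
      For illustration, a populated manifest might look like the following example. Every value shown here (host names, port numbers, and file paths) is a placeholder invented for this sketch; the values in your manifest must match your own Data Fabric cluster:

      CLDB_LIST="cldb-0.mydf.example.com,cldb-1.mydf.example.com"
      CLDB_PORT="7222"
      SECURE="true"
      CLUSTER_NAME="mydf.example.com"
      REST_URL="mcs-0.mydf.example.com:8443"
      EXT_MAPR_MOUNT_DIR="/exthcp"
      TICKET_FILE_LOCATION="/tmp/maprticket_hcpadmin"
      SSL_TRUSTSTORE_LOCATION="/tmp/ssl_truststore"
      HCP_ADMIN_USER="mapr"
      EXT_SECRETS_FILE_LOCATION="/tmp/ext-secrets.yaml"
      FORCE_ERASE="true"
      RESTART_CNODE="true"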
      The dftenant-manifest file is required for cluster registration (see the Registration step below).
    5. Proceed to Configuration.
  2. Configuration

    Deploy a Data Fabric client on all hosts by executing the following command on the primary controller host:

    LOG_FILE_PATH=/tmp/<log_file> MASTER_NODE_IP="<Kubernetes_Master_Node_IP_Address>" /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action configure_dftenants
    The configure_dftenants action deploys HPE Ezmeral Data Fabric client modules (such as the POSIX Client) on the HPE Ezmeral Runtime Enterprise hosts.
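
    For example, with an illustrative master node address and an arbitrary log file name (both values are placeholders, not values required by the product):

    LOG_FILE_PATH=/tmp/configure_dftenants.log MASTER_NODE_IP="10.32.0.14" /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action configure_dftenants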
  3. Registration

    To complete the registration procedure, initiate the register_dftenants action by using the following command:

    LOG_FILE_PATH=/tmp/<log_file> MASTER_NODE_IP="<Kubernetes_Master_Node_IP_Address>" /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action register_dftenants

    When prompted, enter the Site Administrator username and password. HPE Ezmeral Runtime Enterprise uses this information for REST API access to its management module.

    The results of the register_dftenants action are the following:
    • register_dftenants creates a volume on the HPE Ezmeral Data Fabric on Kubernetes cluster for each existing HPE Ezmeral Runtime Enterprise tenant. For any tenant created in the future, a tenant volume is created automatically on the HPE Ezmeral Data Fabric on Kubernetes cluster. The name of each volume is tenant-<ID>, where <ID> is the ID number of the tenant.
    • The register_dftenants action reconfigures Tenant Storage to use the HPE Ezmeral Data Fabric on Kubernetes cluster for all future tenants. In addition:
      • TenantStorage and TenantShare will be created for all existing tenants on the Data Fabric cluster.
      • For AI/ML tenants, the project repository will be changed to use a Data Fabric volume. However, data from the existing project repository will not be migrated.
      • Both TenantShare and TenantStorage will be available for all tenants.
    • The register_dftenants action also reconfigures the following services:
      • Nagios, to track Data Fabric-related client and mount services on the appropriate HPE Ezmeral Runtime Enterprise hosts.
      • WebHDFS, to enable browser-based file system operations, such as upload, mkdir, and so on.

    The Data Fabric client on each node mounts the file systems of the per-tenant volumes on the Data Fabric cluster under /opt/bluedata/mapr/mnt/<cluster_name>/<ext_mapr_mount_dir>/<tenant-id>/, where:

    • <cluster_name> is the name of the HPE Ezmeral Data Fabric on Kubernetes cluster.
    • <ext_mapr_mount_dir> is the EXT_MAPR_MOUNT_DIR value specified in the dftenant-manifest. See Step 1.d.
    • <tenant-id> is the unique identifier for the relevant tenant.
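
    As a quick check on any HPE Ezmeral Runtime Enterprise host, you can list one of these mount points. The cluster name, mount directory, and tenant ID in this sketch are illustrative placeholders:

    # Illustrative values only; substitute your Data Fabric cluster name,
    # EXT_MAPR_MOUNT_DIR value, and an existing tenant ID.
    ls /opt/bluedata/mapr/mnt/mydf.example.com/exthcp/tenant-4/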

    Future Kubernetes clusters created in the HPE Ezmeral Runtime Enterprise will have persistent volumes located in:

    /opt/bluedata/mapr/mnt/<datafabric_cluster_name>/<ext_mapr_mount_dir>/

    The registered HPE Ezmeral Data Fabric on Kubernetes cluster backs the Storage Classes of future Kubernetes Compute clusters created in HPE Ezmeral Runtime Enterprise. The registration procedure does not modify the Storage Classes of Compute clusters that existed before the registration.

  4. Validation:

    To confirm that the registration succeeded, check the following:

    1. Check the output and/or logs of the configure_dftenants and register_dftenants actions.
    2. On the HPE Ezmeral Runtime Enterprise Web UI, view the Tenant Storage tab on the System Settings page. Check that the information displayed on the screen is accurate for the HPE Ezmeral Data Fabric on Kubernetes cluster.
    3. On the HPE Ezmeral Runtime Enterprise Web UI, view the Kubernetes and EPIC Dashboards, and check that the POSIX Client and Mount Path services on all hosts are in a normal state.
    4. On the HPE Ezmeral Runtime Enterprise Web UI, verify that you can browse Tenant Storage for an existing tenant. Optionally, try uploading a file to a directory under Tenant Storage and reading the uploaded file. See Uploading and Downloading Files for more details.
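
    Optionally, you can also repeat the volume check from the Prerequisites section inside the admincli-0 pod. After a successful registration, it should list one tenant-<ID> volume per existing tenant; the namespace below is a placeholder:

    # Same check as in the prerequisites; replace <datafabric_namespace> as appropriate.
    kubectl exec -n <datafabric_namespace> admincli-0 -- \
      maprcli volume list -columns volumename | grep tenant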
  5. Proceed to Step 7: Fine-Tuning the Cluster of the procedure Creating a New Data Fabric Cluster.