Registering HPE Ezmeral Data Fabric on Bare Metal as Tenant Storage

This procedure describes registering HPE Ezmeral Data Fabric on Bare Metal as Tenant Storage. An HPE Ezmeral Data Fabric on Bare Metal cluster is external to the HPE Ezmeral Runtime Enterprise installation. After you have installed or upgraded to HPE Ezmeral Runtime Enterprise 5.5.0 or later, multiple HPE Ezmeral Runtime Enterprise instances can register the same HPE Ezmeral Data Fabric on Bare Metal cluster as Tenant Storage.

Prerequisites

NOTE
You must read all sections before performing this procedure.
  • The user who performs this procedure must have Platform Administrator access to HPE Ezmeral Runtime Enterprise.
  • Activity must be quiesced on the relevant clusters in the HPE Ezmeral instance.
  • The HPE Ezmeral Runtime Enterprise deployment must not already have tenant storage configured.
  • An HPE Ezmeral Data Fabric on Bare Metal cluster must have been deployed. See HPE Ezmeral Data Fabric Documentation for more details on an HPE Ezmeral Data Fabric on Bare Metal cluster.
  • When deploying the Data Fabric on Bare Metal cluster:
    • Keep the UID for the mapr user at the default of 5000.
    • Keep the GID for the mapr group at the default of 5000. (An optional verification check follows this prerequisites list.)
    • The Data Fabric (DF) cluster on Bare Metal must be a SECURE cluster.
    • Data At Rest Encryption (DARE) must have been enabled on the DF cluster on Bare Metal. If deploying a new DF cluster on Bare Metal, enable DARE during the installation. To enable DARE on an existing Data Fabric cluster on Bare Metal, see Enabling Encryption of Data at Rest.
    • For compatibility information, see Support Matrixes.
  • Data Fabric volumes that match per-tenant volume names must not already exist on the Data Fabric on Bare Metal cluster. For more information, see Administering volumes.
  • For Data Fabric clusters of version 7.4.x only, you must apply a patch with version 20240402 or newer, which contains the fix for bug MFS-17055. This update changes the service ticket expiration from two weeks to LIFETIME.
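
The following is a minimal, optional check, not part of the official procedure, that you can run on the Data Fabric on Bare Metal nodes to confirm that the mapr user and group keep the default IDs noted above. It uses only the standard Linux id command.

  # Expected output includes uid=5000(mapr) and gid=5000(mapr)
  id mapr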

About this task


Data Fabric Registration of External Bare Metal Cluster
An HPE Ezmeral Runtime Enterprise deployment can connect to multiple Data Fabric storage deployments; however, only one Data Fabric deployment can be registered as tenant storage.
  • If you have an HPE Ezmeral Data Fabric on Bare Metal cluster outside the HPE Ezmeral Runtime Enterprise, and if you want to configure HPE Ezmeral Data Fabric on Bare Metal as tenant storage, continue with this procedure.
  • If you have already registered another Data Fabric instance as tenant/persistent storage, do not proceed with this procedure. Contact Hewlett Packard Enterprise Support if you want to use a different Data Fabric instance as tenant storage.
NOTE
After you have installed or upgraded to HPE Ezmeral Runtime Enterprise 5.5.0 or later:
  • It is no longer necessary to dedicate an HPE Ezmeral Data Fabric on Bare Metal cluster to one HPE Ezmeral Runtime Enterprise installation.
  • Multiple HPE Ezmeral Runtime Enterprise installations may register the same HPE Ezmeral Data Fabric on Bare Metal cluster as the backing for their tenant storage.
  • On each HPE Ezmeral Runtime Enterprise installation, all tenants will have their tenant storage backed by the same registered HPE Ezmeral Data Fabric on Bare Metal cluster.

The Registration procedure described herein must be run on each HPE Ezmeral Runtime Enterprise installation.

This procedure may require 10 minutes or more per EPIC or Kubernetes host (Controller, Shadow Controller, Arbiter, Master, Worker, and so on), because the registration procedure configures and deploys Data Fabric client software on each host.

After Data Fabric registration is completed, the configuration will look as follows:


HPE Ezmeral Data Fabric Configuration after Registration
The following image shows an example of a configuration in which multiple HPE Ezmeral Runtime Enterprise installations have registered the same Bare Metal Data Fabric cluster as their tenant storage.
[Figure: Multiple HPE Ezmeral Runtime Enterprise installations that have registered the same Data Fabric cluster as their tenant storage]
Registration Steps - A Short Summary:
This section provides a quick reference for the steps required for registration. For detailed instructions, refer to the Procedure section:
  • Log in as the mapr user to a node of the HPE Ezmeral Data Fabric on Bare Metal cluster on which the CLDB and Apiserver services are running, and:
    • mkdir <working-dir-on-bm-df>/
  • On the Primary Controller of the HPE Ezmeral Runtime Enterprise installation, do the following:
    • scp /opt/bluedata/common-install/scripts/mapr/gen-external-secrets.sh mapr@<cldb_node_ip_address>:<working-dir-on-bm-df>/
    • scp /opt/bluedata/common-install/scripts/mapr/prepare-bm-tenants.sh mapr@<cldb_node_ip_address>:<working-dir-on-bm-df>/
    • mkdir /opt/bluedata/tmp/ext-bm-mapr/
  • Create a user-defined manifest for the procedure:
    • If you are not specifying any keys (i.e. to generate default values for all keys):
      touch /opt/bluedata/tmp/ext-bm-mapr/ext-dftenant-manifest.user-defined
    • Otherwise, specify the following parameters:
      • cat << EOF > /opt/bluedata/tmp/ext-bm-mapr/ext-dftenant-manifest.user-defined
        EXT_MAPR_MOUNT_DIR="/<user_specified_directory_in_mount_path_for_volumes>"
        TENANT_VOLUME_NAME_TAG="<user_defined_tag_to_be_included_in_tenant_volume_names>"
        EOF
        
  • On the CLDB node of the HPE Ezmeral Data Fabric on Bare Metal cluster:
    • cd <working-dir-on-bm-df>/
    • ./prepare-bm-tenants.sh
  • On the Primary Controller of HPE Ezmeral Runtime Enterprise:
    • Move or remove any existing bm-info-*.tar from /opt/bluedata/tmp/ext-bm-mapr/
    • scp mapr@<cldb_node_ip_address>:<working-dir-on-bm-df>/bm-info-*.tar /opt/bluedata/tmp/ext-bm-mapr/
    • cd /opt/bluedata/tmp/ext-bm-mapr/
    • LOG_FILE_PATH=<log_file_path> /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action ext-bm-df-registration

Procedure

  1. Preparation (On HPE Ezmeral Data Fabric on Bare Metal Cluster):
    1. Verify that the HPE Ezmeral Data Fabric on Bare Metal cluster is in a good state.
    2. Before starting the Registration procedure, make sure that the prepare-bm-tenants.sh script has already been run on the required HPE Ezmeral Data Fabric on Bare Metal cluster. The prepare-bm-tenants.sh and gen-external-secrets.sh scripts are available on the HPE Ezmeral Runtime Enterprise Primary Controller, under /opt/bluedata/common-install/scripts/mapr/, and can be copied to the external HPE Ezmeral Data Fabric on Bare Metal cluster.
    3. To run the prepare-bm-tenants.sh script, do the following (a consolidated command sketch follows this step):
      NOTE
      You can run prepare-bm-tenants on the HPE Ezmeral Data Fabric on Bare Metal cluster on behalf of a single HPE Ezmeral Runtime Enterprise instance, or on behalf of multiple HPE Ezmeral Runtime Enterprise instances simultaneously.
      1. With Administrator credentials (such as the mapr user), log in to a node of the external HPE Ezmeral Data Fabric on Bare Metal cluster, on which the CLDB and Apiserver services are running.
      2. Copy the prepare-bm-tenants.sh and gen-external-secrets.sh scripts to a CLDB node of the external HPE Ezmeral Data Fabric on Bare Metal cluster, placing both scripts in the same working directory.
      3. Ensure the prepare-bm-tenants.sh file has executable permission and execute the script.
        Upon successful execution of the prepare-bm-tenants.sh script:
        • A file named bm-info-<8_byte_uuid>.tar is created in the same directory (a UUID is generated during each run of the prepare-bm-tenants step).
        • The bm-info-<8_byte_uuid>.tar file contains information on the Data Fabric cluster and other results of the prepare-bm-tenants step. The bm-info-<8_byte_uuid>.tar file must be placed on the HPE Ezmeral Runtime Enterprise Primary Controller, under /opt/bluedata/tmp/ext-bm-mapr/, before proceeding to the next step.
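
      The following is a minimal sketch that consolidates the commands for this preparation step (they also appear in the summary above). The placeholders <cldb_node_ip_address> and <working-dir-on-bm-df> are examples; substitute values for your environment, and grant executable permission in whatever way your site prefers (chmod is shown here only as one option).

        # On a CLDB node of the HPE Ezmeral Data Fabric on Bare Metal cluster, as the mapr user:
        mkdir <working-dir-on-bm-df>/

        # On the HPE Ezmeral Runtime Enterprise Primary Controller, copy both scripts to that directory:
        scp /opt/bluedata/common-install/scripts/mapr/gen-external-secrets.sh mapr@<cldb_node_ip_address>:<working-dir-on-bm-df>/
        scp /opt/bluedata/common-install/scripts/mapr/prepare-bm-tenants.sh mapr@<cldb_node_ip_address>:<working-dir-on-bm-df>/

        # Back on the CLDB node, make the script executable and run it:
        cd <working-dir-on-bm-df>/
        chmod +x prepare-bm-tenants.sh gen-external-secrets.sh
        ./prepare-bm-tenants.sh

        # Confirm that the result tarball was created:
        ls bm-info-*.tar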
  2. Before Registration (On HPE Ezmeral Runtime Enterprise Primary Controller):

    Perform the following steps on the HPE Ezmeral Runtime Enterprise Primary Controller host.

    1. Ensure that HPE Ezmeral Runtime Enterprise is not currently in Site Lockdown.
    2. On the HPE Ezmeral Runtime Enterprise Primary Controller host, make sure that the directory /opt/bluedata/tmp/ext-bm-mapr/ exists; create it if it does not.
    3. Ensure that the bm-info-<8_byte_uuid>.tar file is placed under /opt/bluedata/tmp/ext-bm-mapr/. Also, ensure that there is no more than one bm-info-<uuid>.tar file under /opt/bluedata/tmp/ext-bm-mapr/.
    4. Create a new manifest file named ext-dftenant-manifest.user-defined under /opt/bluedata/tmp/ext-bm-mapr/ on the HPE Ezmeral Runtime Enterprise Primary Controller host.
    5. Enter the following information in /opt/bluedata/tmp/ext-bm-mapr/ext-dftenant-manifest.user-defined:
      EXT_MAPR_MOUNT_DIR="/<directory_in_mount_path_for_volumes>"
      TENANT_VOLUME_NAME_TAG="<user_defined_tag_to_be_included_in_tenant_volume_names>" 
      • The EXT_MAPR_MOUNT_DIR is an optional parameter. This value must begin with a /. It must not equal / or /mapr. If you do not specify any value, a default value of /exthcp-<bdshared_global_uniqueid> is generated. The bdshared_global_uniqueid is automatically generated for the HPE Ezmeral installation.
      • The TENANT_VOLUME_NAME_TAG is an optional parameter, and it will be included as part of the name of every tenant volume (for the HPE Ezmeral instance) created on the Data Fabric cluster. If specified, the value must only contain characters that are allowed in a volume name, and must not contain the period (.) character.
      • The TENANT_VOLUME_NAME_TAG specified in ext-dftenant-manifest.user-defined influences the tenant volume names for tenants created after the Registration. An example manifest with hypothetical values follows this list.
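
      The following is a minimal example of creating the manifest with both optional keys set. The values /ext-df-data and acme are hypothetical; substitute values appropriate for your environment, or create an empty file to accept the defaults for all keys.

      cat << EOF > /opt/bluedata/tmp/ext-bm-mapr/ext-dftenant-manifest.user-defined
      EXT_MAPR_MOUNT_DIR="/ext-df-data"
      TENANT_VOLUME_NAME_TAG="acme"
      EOF

      With these values, tenant volumes created for this installation would be named acme-<bdshared_global_uniqueid>-tenant-<tenant-id> (see the Registration step for details on volume naming).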
  3. Registration

    The ext-bm-df-registration action represents the overall Registration procedure for External HPE Ezmeral Data Fabric on Bare Metal.

    1. To complete the registration procedure, initiate the ext-bm-df-registration action by using the following command:
      LOG_FILE_PATH=<path_to_log_file> /opt/bluedata/bundles/hpe-cp-*/startscript.sh --action ext-bm-df-registration

      The LOG_FILE_PATH specified must be a path that exists on all the HPE Ezmeral hosts.

    2. When prompted, enter the Platform Administrator username and password. HPE Ezmeral Runtime Enterprise uses this information for REST API access to its management module.
      NOTE
      The ext-bm-df-registration action validates the contents of bm-info-<8_byte_uuid>.tar and finalizes the ext-dftenant-manifest. The following key-value pairs will be automatically added to the manifest:
      CLDB_LIST="<comma-separated;FQDN_or_IP_address_for_each_CLDB_node>"
      CLDB_PORT="<port_number_for_CLDB_service>"
      SECURE="<true_or_false>" (Default is true)
      CLUSTER_NAME="<name_of_DataFabric_cluster>"
      REST_URL="<REST_server_hostname:port>" (or space-delimited list of <REST_server_hostname:port> values)
      TICKET_FILE_LOCATION="<path_to_service_ticket_for_HCP_admin>"
      SSL_TRUSTSTORE_LOCATION="<path_to_ssl_truststore>"
      EXT_SECRETS_FILE_LOCATION="<path_to_external_secrets_file>"

      The ext-bm-df-registration action fails if volumes that match per-tenant volume names already exist on the external HPE Ezmeral Data Fabric on Bare Metal cluster.

      The result of the ext-bm-df-registration action is the following:
      • The Data Fabric client is deployed on the HPE Ezmeral Runtime Enterprise (ERE) hosts.
      • For each existing tenant, a Data Fabric volume is created on the HPE Ezmeral Data Fabric on Bare Metal cluster.
      • For each new tenant created in the future, a tenant volume will be created automatically on the HPE Ezmeral Data Fabric on Bare Metal cluster.
      • Tenant volume names are in the form <user-defined-prefix>-<bdshared_global_uniqueid>-tenant-<tenant-id> (an illustrative check follows this step), where:
        • The user-defined-prefix is the value of TENANT_VOLUME_NAME_TAG, if it was specified in ext-dftenant-manifest.user-defined.
        • bdshared_global_uniqueid is an identifier generated automatically for the HPE Ezmeral installation.
        • tenant-id is a unique identifier for the relevant HPE Ezmeral tenant on the HPE Ezmeral instance.
      • Tenant Storage is configured to use the HPE Ezmeral Data Fabric on Bare Metal cluster for all future tenants. In addition:
        • TenantStorage and TenantShare are created for all existing tenants on the Data Fabric cluster.
        • Both TenantShare and TenantStorage are available for all tenants.
      • The Registration action also reconfigures the following services:
        • Nagios, to track Data Fabric related client and mount services on the appropriate HPE Ezmeral Runtime Enterprise hosts.
        • WebHDFS, to enable browser-based file system operations, such as upload and mkdir.

    Future Kubernetes clusters created in the HPE Ezmeral Runtime Enterprise will have persistent volumes located under <df_cluster_name>/<ext_mapr_mount_dir>-<bdshared_global_uniqueid>/

    The registered HPE Ezmeral Data Fabric on Bare Metal cluster will be the backing for the Storage Classes of future Kubernetes Compute clusters that are created in the HPE Ezmeral Runtime Enterprise.

    The registration procedure does not modify the Storage Classes of Compute clusters that existed before the registration.
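
    As an illustration of the tenant volume naming described above, the following is a minimal, optional check that you can run on the HPE Ezmeral Data Fabric on Bare Metal cluster, for example to confirm that no conflicting volumes exist before registration. The tag acme is hypothetical, and the maprcli command must be run by a Data Fabric user with permission to list volumes.

      # With TENANT_VOLUME_NAME_TAG="acme", tenant volumes for an installation look like:
      #   acme-<bdshared_global_uniqueid>-tenant-<tenant-id>
      maprcli volume list -columns volumename | grep -- '-tenant-'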

  4. Validation:

    To confirm that the Registration is completed, check the following:

    1. Check the output and log of the ext-bm-df-registration action.
    2. On the HPE Ezmeral Runtime Enterprise Web UI, view the Tenant Storage tab on the System Settings page. Check that the information displayed on the screen is accurate for the HPE Ezmeral Data Fabric on Bare Metal cluster.
    3. On the HPE Ezmeral Runtime Enterprise, view the Kubernetes and EPIC Dashboards, and ensure that the POSIX Client and Mount Path services on all hosts are in a normal state. (An optional command-line check follows this list.)
    4. On the HPE Ezmeral Runtime Enterprise web UI, as an authenticated user, check that you are able to browse Tenant Storage on an existing tenant. You can also try uploading a file to a directory under Tenant Storage, and reading the uploaded file. See Uploading and Downloading Files for more details.
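
    As an optional supplement to the checks above, the following is a minimal command-line sketch for confirming the Data Fabric client on a host. It assumes the default /mapr mount point for the POSIX client; the exact mount layout may differ in your configuration.

      # On an HPE Ezmeral Runtime Enterprise host, confirm that the Data Fabric mount is present:
      mount | grep mapr
      # Browse the registered cluster through the mount point (substitute your cluster name):
      ls /mapr/<df_cluster_name>/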