Agent-Based Kubernetes Host Installation

If your environment does not allow key-based SSH, then you must run the command-line agent installation described in this article on each Kubernetes Worker host being added, before adding the hosts using the web interface.

NOTE These instructions assume that the Controller host was installed with the option --worker-agent-install. If that was not done and if you do not want to reinstall the Controller host with that option specified, then please contact HPE Technical Support for possible options.
NOTE If your environment does allow key-based SSH on all of the hosts, then you may bypass this step and proceed directly to Kubernetes Host Step 1: Add the Public SSH Key.

To install the agent on each Kubernetes host:

  1. If you encountered any errors while pre-checking and/or installing HPE Ezmeral Runtime Enterprise on the Controller from the command line, then be sure to replicate the same remediation steps on each Worker host you will be adding before proceeding with the installation.
  2. Copy the .erlang.cookie file from the Controller host to the Kubernetes hosts you are adding. This file is located in the home directory of the user who installed HPE Ezmeral Runtime Enterprise. This step is required to allow secure communications between hosts.
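    For example, the following copies the cookie from the Controller to one Worker (a sketch; <install-user> and <worker-ip> are placeholders, and you will be prompted for a password if key-based SSH is unavailable):

      # Run on the Controller host as the user who installed the platform.
      scp $HOME/.erlang.cookie <install-user>@<worker-ip>:~/.erlang.cookie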
  3. Manually copy the HPE Ezmeral Runtime Enterprise binary (.bin) from http://<controller-ip>/repos/common-cp-<os>-release-<version>-<build>.bin to each Worker host that you will be adding, where:
    • <controller-ip> is the IP address of the Controller host.
    • <os> is the operating system (either rhel or sles).
    • <version> is the .bin version.
    • <build> is the specific .bin build number.
    NOTE The remainder of this article will refer to this .bin file as <common>.bin.
  4. Make the .bin file executable by executing the command chmod a+x <common>.bin.
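    For example, the download and chmod can be combined as follows (a sketch; substitute the actual file name served by your Controller's repo):

      # Run on each Worker host being added.
      curl -o /tmp/<common>.bin http://<controller-ip>/repos/common-cp-<os>-release-<version>-<build>.bin
      chmod a+x /tmp/<common>.bin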
  5. Download the .parms file from http://<controller-ip>/repos/agent-install-worker.parms.
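    For example, on each Worker host (a sketch; this assumes the Controller's repo is reachable over HTTP from the Worker):

      curl -o /tmp/agent-install-worker.parms http://<controller-ip>/repos/agent-install-worker.parms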
  6. Modify the relevant settings in /tmp/agent-install-worker.parms to the appropriate values. The .parms file with these edits will be used on every Kubernetes Worker host.
    • Set the Controller host parameter: The Controller parameter settings vary based on whether or not platform HA is enabled.
      • If platform HA is not enabled, then you must set the HAENABLED (platform High Availability Enabled) field to false and provide both the Controller IP address and hostname in the Platform HA not configured section.

        ################################################################################
        #                          Platform HA not configured                          #
        # Ensure the appropriate parameters are uncommented and set in this section    #
        # when Platform HA is not enabled.                                             #
        ################################################################################

        ## Is PLHA enabled?
        #HAENABLED=false
        Note: Uncomment this.
        ## Controller node's IP address.
        #CONTROLLER=<Controller IP address>
        Note: Uncomment this and provide the Controller host IP address.
        ## Controller node's FQDN.
        #CONTROLLER_HOSTNAME=<FQDN of controller>
        Note: Uncomment this and provide the Controller hostname. The Controller hostname must be all lowercase, per the Linux hostname naming convention.
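        For example, the section might look like this after editing (a sketch; the IP address and hostname are placeholder values):

          ## Is PLHA enabled?
          HAENABLED=false
          ## Controller node's IP address.
          CONTROLLER=10.10.1.10
          ## Controller node's FQDN.
          CONTROLLER_HOSTNAME=controller.example.com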
      • If platform HA is enabled, then you must set the HAENABLED (Platform High Availability Enabled) field to true and provide both the IP address and hostname for the Controller, Shadow Controller, and Arbiter hosts in the Platform HA configured section.

        Further, if the deployment uses a Cluster IP address, then you must set CLUSTERIP (Cluster IP address); otherwise, you can leave it commented.

        ################################################################################
        #                            Platform HA configured                            #
        # Ensure the appropriate parameters are uncommented and set in this section    #
        # when Platform HA is enabled.                                                 #
        ################################################################################

        ## Is Platform HA enabled?
        #HAENABLED=true
        Note: Uncomment this.
        ## The cluster IP address.
        #CLUSTERIP=<Cluster IP address>
        Note: Uncomment this if a Cluster IP address is used.
        ## Controller node's IP address. A failover is okay, but this node must be alive
        ## for a worker to be added.
        #CONTROLLER=<Controller IP address>
        Note: Uncomment this and provide the Controller IP address.
        ## The original shadow controller node's IP address. This node must be alive for
        ## the worker node to be added.
        #SHADOWCTRL=<Shadow IP address>
        Note: Uncomment this and then provide the Shadow IP address.
        ## The arbiter node's IP address. This node must be alive for the worker node to
        ## be added.
        #ARBITER=<Arbiter IP address>
        Note: Uncomment this and then provide the Arbiter IP address.
        ## Controller node's FQDN.
        #CONTROLLER_HOSTNAME=<FQDN of controller>
        Note: Uncomment this and then provide the Controller hostname.
        ## Shadow controller node's FQDN.
        #SHADOW_HOSTNAME=<FQDN of Shadow>
        Note: Uncomment this and then provide the Shadow hostname. The Shadow hostname must be all lowercase, per the Linux hostname naming convention.
        ## Arbiter node's FQDN.
        #ARBITER_HOSTNAME=<FQDN of Arbiter>
        Note: Uncomment this and then provide the Arbiter hostname. The Arbiter hostname must be all lowercase, per the Linux hostname naming convention.
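        For example, the section might look like this after editing (a sketch; all addresses and hostnames are placeholder values, and CLUSTERIP is set only because this example assumes a Cluster IP address is used):

          HAENABLED=true
          CLUSTERIP=10.10.1.100
          CONTROLLER=10.10.1.10
          SHADOWCTRL=10.10.1.11
          ARBITER=10.10.1.12
          CONTROLLER_HOSTNAME=controller.example.com
          SHADOW_HOSTNAME=shadow.example.com
          ARBITER_HOSTNAME=arbiter.example.com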
    • Set the installation user ID and group ID parameters: If you have already created a system account on the Controller host, then you will need to set the BLUEDATA_USER and BLUEDATA_GROUP values accordingly.

      ################################################################################
      #                         Installation user and group                          #
      # All nodes in the HPE physical cluster must be installed as the same user.    #
      # Specify this if the common bundle is not being executed by the same user as  #
      # the user that will be running the HPE services. Please refer to the          #
      # System requirements guide for information on permissions required for a      #
      # non-root user to install and run HPE software.                               #
      ################################################################################

      #BLUEDATA_USER=root
      Note: Uncomment this and then provide the user ID, as appropriate.
      #BLUEDATA_GROUP=root
      Note: Uncomment this and then provide the group ID, as appropriate.
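      For example, if the platform runs under a dedicated system account (hypothetical names):

        BLUEDATA_USER=bluedata
        BLUEDATA_GROUP=bluedata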

    • Set other miscellaneous parameters: Set the following parameters to match the Controller host settings.

      ################################################################################
      #                           Miscellaneous parameters                           #
      #                                                                              #
      ################################################################################

      ## Automount root on the controller node. It must be the same on the worker too.
      CONTROLLER_AUTOMOUNT_ROOT=/net/
      Note: Modify this if needed.
      ## Bundle flavor used to install the controller. This may be either 'minimal' or
      ## 'full'.
      CONTROLLER_BUNDLE_FLAVOR=minimal
      Note: Modify this if needed.
      ## Skip configuring NTP? 'true' or 'false'
      #NO_NTP_CONFIG=false
      Note: Modify this, as appropriate.
      ## If the controller was configured with proxy information, please specify it
      ## for the worker too.
      #PROXY_URL=
      Note: Set this if the Controller is configured with a proxy.
      #NO_PROXY=
      Note: Set this if the Controller was configured with the --no-proxy option during installation.
      ## Controls whether the server should rollback to a clean state when an error
      ## is encountered during installation. Setting it to 'false' helps with debugging
      ## but the server should be manually cleaned up before re-attempting the
      ## installation.
      ## Values: 'true' or 'false'.
      #ROLLBACK_ON_ERROR='false'
      # If the controller was configured with --dockerrootsize that is different from 20,
      # specify it here.
      DOCKER_ROOTSIZE=20
      Note: Set this, if applicable.
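      For example, if the Controller was installed behind a proxy (hypothetical values):

        PROXY_URL=http://proxy.example.com:8080
        NO_PROXY=localhost,127.0.0.1,.example.com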
  7. Set the Erlang parameter: set ERLANG_COOKIE to the value stored in $HOME/.erlang.cookie on the Controller host:

    ERLANG_COOKIE=<value stored in $HOME/.erlang.cookie on the Controller host>
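    For example, you can read the cookie value on the Controller host and paste it into the .parms file (a sketch; the cookie value shown is hypothetical):

      # On the Controller host: print the cookie value.
      cat $HOME/.erlang.cookie
      # Then set the parameter in /tmp/agent-install-worker.parms, for example:
      # ERLANG_COOKIE=QWERTYUIOPASDFGHJKLZ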
  8. Copy the modified version of the .parms file onto every new Kubernetes Worker host.
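    For example, from the host where you edited the file (a sketch; <user> and <worker-ip> are placeholders):

      scp /tmp/agent-install-worker.parms <user>@<worker-ip>:/tmp/agent-install-worker.parms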
  9. On each Worker host, execute the installer pre-check using the following command, where <A.B.C.D> is the IP address of the host and <name> is the FQDN of the host:
    • Kubernetes host: /tmp/<precheck>.bin --params /tmp/agent-install-worker.parms --nodetype k8shost --worker <A.B.C.D> --workerhostname <name>
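    For example, with the placeholders filled in (hypothetical IP address and FQDN):

      /tmp/<precheck>.bin --params /tmp/agent-install-worker.parms --nodetype k8shost --worker 10.10.1.21 --workerhostname worker1.example.com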
  10. If needed, remediate any issues reported by the installer script, and then re-run the same pre-check script until all tests pass or until you have accounted for any warnings.
  11. Run the common install .bin:

    <controller-ip>/opt/bluedata/bundles/common-cp-<version>-<build>.bin
  12. Copy the file /opt/bluedata/keys/authorized_keys from the Controller host to the same location on the new Kubernetes Worker host, with the same owner/group, permissions, and SELinux context.
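    One way to do this (a sketch; <install-user>, <worker-ip>, <owner>, <group>, and <mode> are placeholders that must be taken from your environment and from the actual file on the Controller, e.g. as shown by ls -lZ /opt/bluedata/keys/authorized_keys):

      # On the Controller host: copy the key file to the Worker.
      scp /opt/bluedata/keys/authorized_keys <install-user>@<worker-ip>:/opt/bluedata/keys/authorized_keys
      # On the Worker host: match the Controller's owner/group and permissions,
      # then restore the default SELinux context.
      chown <owner>:<group> /opt/bluedata/keys/authorized_keys
      chmod <mode> /opt/bluedata/keys/authorized_keys
      restorecon -v /opt/bluedata/keys/authorized_keys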
NOTE Hewlett Packard Enterprise recommends updating to the latest OS packages (e.g. yum update) before installing HPE Ezmeral Runtime Enterprise.

After the installation completes, you should see the message Successfully prepared server as a HPE CP Kubernetes node. Proceed directly to Kubernetes Host: Select the Hosts, as appropriate.

If the installation fails, then erase HPE Ezmeral Runtime Enterprise from the host by executing the command /tmp/<common>.bin --erase (or sudo /tmp/<common>.bin --erase, or SUDO_PREFIX="mysudo"; /tmp/<common>.bin --erase). The instructions in Step 1 Troubleshooting for the Controller host can also help you remediate problems on this host or hosts.