Agent-Based Kubernetes Host Installation
If your environment does not allow key-based SSH, then you must run the command-line agent installation described in this article on each Kubernetes Worker host being added before adding the hosts using the web interface. This procedure requires that HPE Ezmeral Runtime Enterprise was installed on the Controller host with the --worker-agent-install option. If that was not done and you do not want to reinstall the Controller host with that option specified, then contact HPE Technical Support for possible options.
To install the agent on each Kubernetes host:
- If you encountered any errors while pre-checking and/or installing HPE Ezmeral Runtime Enterprise on the Controller from the command line, then be sure to replicate the same remediation steps on each Worker host you will be adding before proceeding with the installation.
- Copy the .erlang.cookie file from the Controller host to the Kubernetes hosts you are adding. This file is located in the home directory of the user who installed HPE Ezmeral Runtime Enterprise. This step is required to allow secure communications between hosts. (A copy sketch follows this item.)
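For example, a minimal sketch using scp, run from the Controller as the install user (the root account and the Worker address 10.1.0.21 are placeholder assumptions):
scp -p $HOME/.erlang.cookie root@10.1.0.21:~/
ssh root@10.1.0.21 "chmod 400 ~/.erlang.cookie"
Erlang expects the cookie file to be readable only by its owner, hence the chmod 400.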
- Manually copy the HPE Ezmeral Runtime Enterprise binary (.bin) from
http://<controller-ip>/repos/common-cp-<os>-release-<version>-<build>.bin
to each Worker host that you will be adding, where:
<controller-ip> is the IP address of the Controller host.
<os> is the operating system (either rhel or sles).
<version> is the .bin version.
<build> is the specific .bin build number.
NOTE: The remainder of this article refers to this .bin file as <common>.bin. (A download sketch follows this item.)
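One way to fetch the binary on each Worker is over HTTP from the Controller's repos share; the address, OS, version, and build below are placeholders:
wget http://10.1.0.10/repos/common-cp-rhel-release-5.6-3000.bin -O /tmp/common-cp-rhel-release-5.6-3000.bin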
- Make the .bin file executable by executing the command chmod a+x <common>.bin.
- Download the .parms file from http://<controller-ip>/repos/agent-install-worker.parms (a download sketch follows this item).
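For example, saving it to the path that the later steps expect:
wget http://<controller-ip>/repos/agent-install-worker.parms -O /tmp/agent-install-worker.parms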
- Modify the relevant settings in /tmp/agent-install-worker.parms to the appropriate values. The .parms file with these edits will be used on every Kubernetes Worker host.
- Set the Controller host parameter: The Controller parameter settings vary based on whether or not platform HA is enabled.
If platform HA is not enabled, then you must set the HAENABLED (Platform High Availability Enabled) field to false and provide both the Controller IP address and hostname in the Platform HA not configured section; an edited example follows the excerpt below.
Note: Uncomment this.
################################################################################
# Platform HA not configured                                                  #
# Ensure the appropriate parameters are uncommented and set in this section   #
# when Platform HA is not enabled.                                            #
################################################################################
## Is PLHA enabled?
#HAENABLED=false
Note: Uncomment this and provide the Controller host IP address.
## Controller node's IP address.
#CONTROLLER=<Controller IP address>
Note: Uncomment this and provide the Controller hostname. The Controller hostname must be all lowercase, per the Linux hostname naming convention.
## Controller node's FQDN.
#CONTROLLER_HOSTNAME=<FQDN of controller>
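For instance, a minimal edited non-HA section might look like this (the address and hostname are placeholders):
HAENABLED=false
CONTROLLER=10.1.0.10
CONTROLLER_HOSTNAME=controller.example.com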
If platform HA is enabled, then you must set the HAENABLED (Platform High Availability Enabled) field to true and provide both the IP address and hostname for the Controller, Shadow Controller, and Arbiter hosts in the Platform HA configured section; an edited example follows the excerpt below. Further, if the deployment uses a Cluster IP address, then you must set CLUSTERIP (Cluster IP address); otherwise, you can leave it commented.
Note: Uncomment this.
################################################################################
# Platform HA configured                                                      #
# Ensure the appropriate parameters are uncommented and set in this section   #
# when Platform HA is enabled.                                                #
################################################################################
## Is Platform HA enabled?
#HAENABLED=true
Note: Uncomment this if a Cluster IP address is used.
## The cluster IP address.
#CLUSTERIP=<Cluster IP address>
Note: Uncomment this and provide the Controller IP address.
## Controller node's IP address. A failover is okay, but this node must be alive
## for a worker to be added.
#CONTROLLER=<Controller IP address>
Note: Uncomment this and then provide the Shadow IP address.
## The original shadow controller node's IP address. This node must be alive for
## the worker node to be added.
#SHADOWCTRL=<Shadow IP address>
Note: Uncomment this and then provide the Arbiter IP address.
## The arbiter node's IP address. This node must be alive for the worker node to
## be added.
#ARBITER=<Arbiter IP address>
Note: Uncomment this and then provide the Controller hostname.
## Controller node's FQDN.
#CONTROLLER_HOSTNAME=<FQDN of controller>
Note: Uncomment this and then provide the Shadow hostname. The Shadow hostname must be all lowercase, per the Linux hostname naming convention.
## Shadow controller node's FQDN.
#SHADOW_HOSTNAME=<FQDN of Shadow>
Note: Uncomment this and then provide the Arbiter hostname. The Arbiter hostname must be all lowercase, per the Linux hostname naming convention.
## Arbiter node's FQDN.
#ARBITER_HOSTNAME=<FQDN of Arbiter>
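For instance, a minimal edited HA section might look like this (all addresses and names are placeholders):
HAENABLED=true
CLUSTERIP=10.1.0.100
CONTROLLER=10.1.0.10
SHADOWCTRL=10.1.0.11
ARBITER=10.1.0.12
CONTROLLER_HOSTNAME=controller.example.com
SHADOW_HOSTNAME=shadow.example.com
ARBITER_HOSTNAME=arbiter.example.com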
- Set the installation user ID and group ID parameters: If you have already created a system account on the Controller host, then you will need to set the BLUEDATA_USER and BLUEDATA_GROUP values accordingly.
Note: Uncomment this and then provide the user ID, as appropriate.
################################################################################
# Installation user and group                                                 #
# All nodes in the HPE physical cluster must be installed as the same user.   #
# Specify this if the common bundle is not being executed by the same user as #
# the user that will be running the HPE services. Please refer to the         #
# System requirements guide for information on permissions required for a     #
# non-root user to install and run HPE software.                              #
################################################################################
#BLUEDATA_USER=root
Note: Uncomment this and then provide the group ID, as appropriate.
#BLUEDATA_GROUP=root
- Set other miscellaneous parameters: Set the following parameters to match the Controller host settings.
Note: Modify this if needed.
################################################################################
# Miscellaneous parameters                                                    #
#                                                                              #
################################################################################
## Automount root on the controller node. It must be the same on the worker too.
CONTROLLER_AUTOMOUNT_ROOT=/net/
Note: Modify this if needed.
## Bundle flavor used to install the controller. This may be either 'minimal' or
## 'full'
CONTROLLER_BUNDLE_FLAVOR=minimal
Note: Modify this, as appropriate.
## Skip configuring NTP? 'true' or 'false'
#NO_NTP_CONFIG=false
Note: Set this if the Controller is configured with a proxy.
## If the controller was configured with proxy information, please specify it
## for the worker too.
#PROXY_URL=
Note: Set this if the Controller was configured with the --no-proxy option during installation.
#NO_PROXY=
Note: Set this, if applicable.
## Controls whether the server should rollback to a clean state when an error
## is encountered during installation. Setting it to 'false' helps with debugging
## but the server should be manually cleaned up before re-attempting the
## installation.
## Values: 'true' or 'false'.
#ROLLBACK_ON_ERROR='false'
# If the controller was configured with --dockerrootsize that is different from 20,
# specify it here.
DOCKER_ROOTSIZE=20
- Set the Erlang parameter to the value stored in $HOME/.erlang.cookie on the Controller host:
ERLANG_COOKIE=<value stored in $HOME/.erlang.cookie on the Controller>
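One way to obtain that value is to read the file on the Controller as the install user:
cat $HOME/.erlang.cookie
Copy the printed string into the ERLANG_COOKIE setting in the .parms file.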
- Copy the modified version of the .parms file onto every new Kubernetes Worker host.
- On each Worker host, execute the installer precheck using the following command, where <A.B.C.D> is the IP address of the host, and <name> is the FQDN of the host (a concrete example follows):
- Kubernetes host:
/tmp/<precheck>.bin --params /tmp/agent-install-worker.parms --nodetype k8shost --worker <A.B.C.D> --workerhostname <name>
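For example, with hypothetical values substituted for the IP address and FQDN placeholders:
/tmp/<precheck>.bin --params /tmp/agent-install-worker.parms --nodetype k8shost --worker 10.1.0.21 --workerhostname worker1.example.com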
- If needed, remediate any issues reported by the installer script, and then re-run the same pre-check script until all tests pass or until you have accounted for any warnings.
- Run the common install .bin:
<controller-ip>/opt/bluedata/bundles/common-cp-<version>-<build>.bin
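The path above identifies the bundle staged on the Controller. If you instead run the copy downloaded to /tmp on the Worker, a sketch of the invocation, assuming the installer accepts the same --params flag as the precheck, would be:
/tmp/<common>.bin --params /tmp/agent-install-worker.parms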
- Copy the file /opt/bluedata/keys/authorized_keys from the Controller host to the same location on the new Kubernetes Worker host, with the same owner/group, permissions, and SELinux context (see the sketch below).
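A minimal sketch of that copy, run from the Controller (the Worker address is a placeholder; adjust the owner/group to your install user):
scp -p /opt/bluedata/keys/authorized_keys root@10.1.0.21:/opt/bluedata/keys/authorized_keys
ssh root@10.1.0.21 "chown root:root /opt/bluedata/keys/authorized_keys && restorecon /opt/bluedata/keys/authorized_keys"
Here scp -p preserves the permission mode, and restorecon restores the default SELinux context for that path.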
After the installation completes, you should see the message "Successfully prepared server as a HPE CP Kubernetes node". Proceed directly to Kubernetes Host: Select the Hosts, as appropriate.
If the installation fails, then erase HPE Ezmeral Runtime Enterprise from the host by executing the command /tmp/<common>.bin --erase (or sudo /tmp/<common>.bin --erase, or SUDO_PREFIX="mysudo"; /tmp/<common>.bin --erase). The instructions contained in Step 1 Troubleshooting for the Controller host can also help you remediate problems on the affected host or hosts.