Uninstalling and Reinstalling HPE Ezmeral Runtime Enterprise
There are many reasons why you may need to uninstall HPE Ezmeral Runtime Enterprise from the Controller host and any installed Worker hosts and then start over, such as:
- HPE Ezmeral Runtime Enterprise installed as root when you meant to install as a non-root user. In this case, a subsequent non-root installation will probably fail if the hosts have not been refreshed. If this happens, contact Hewlett Packard Enterprise for support.
- Unrecoverable error.
- Configuration changes to the host or infrastructure.
- Moving from a test environment to a production environment.
There are two basic ways to uninstall and reinstall HPE Ezmeral Runtime Enterprise:
- Completely refresh the Controller host and any Worker hosts to a "bare metal" state, reinstall the operating system, and then reinstall HPE Ezmeral Runtime Enterprise. This is the preferred method, because installation makes numerous configuration changes to the hosts in the deployment that are not completely reversible and that may impact the reinstallation process. Completely refreshing the hosts is beyond the scope of this documentation. Once the hosts are refreshed, you may begin the installation process again, as described in Installation Overview.
- Run the HPE Ezmeral Runtime Enterprise uninstaller on the Controller host and, if needed, on any Worker hosts. You may need to use this option if completely refreshing the hosts cannot be accomplished easily. This article describes this method.
Backing up the Configuration
If you plan to rebuild the deployment on another host and want to carry over settings from the deleted HPE Ezmeral Runtime Enterprise deployment, then back up the following:
- Collect a Level 2 support bundle. See Support Bundles Tab.
- Take screenshots of all platform and tenant/project settings. (The support bundle already captures these settings, but having screenshots will help you apply similar settings when redeploying HPE Ezmeral Runtime Enterprise.)
- Back up any customization changes, including but not limited to:
  - Authentication package: /opt/bluedata/catalog/postconfig/userconfig.tgz
  - Monitor changes: /etc/curator.actions.yaml (inside the monitor container)
  - Custom feeds: Execute the change_feed command on the new HPE Ezmeral Runtime Enterprise deployment.
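The backup items above can be sketched as a small script. This is a minimal sketch, not an official tool: the destination directory is an assumption, and the monitor container name ("monitor") is an assumption you should verify in your deployment.

```shell
#!/bin/sh
# Sketch: collect the customization files listed above into one directory.
# BACKUP_DIR and the "monitor" container name are assumptions.
BACKUP_DIR="${BACKUP_DIR:-/tmp/hpecp-config-backup}"
mkdir -p "$BACKUP_DIR"

# Authentication package (path from the list above).
if [ -f /opt/bluedata/catalog/postconfig/userconfig.tgz ]; then
    cp /opt/bluedata/catalog/postconfig/userconfig.tgz "$BACKUP_DIR/"
fi

# The monitor changes live inside the monitor container, so copy them
# out with "docker cp" (container name is an assumption).
docker cp monitor:/etc/curator.actions.yaml "$BACKUP_DIR/" 2>/dev/null || \
    echo "curator.actions.yaml not copied; check the monitor container name"

ls "$BACKUP_DIR"
```

Copy the resulting directory off the host before running the uninstaller, since the uninstall may remove local files.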
Running the Uninstaller
To run the uninstaller:
- Back up all data.
- Remove all FS mounts. See The FS Mounts Screen.
- Log in to the host that you will use as the Controller host, using either the root account and password or your assigned username and password.
- On the Controller host, execute the following command:
  /opt/bluedata/bundles/<hpecp_install_folder>/startscript.sh --erase --force
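Because `<hpecp_install_folder>` varies by release, you may want to locate the startscript before running it. The sketch below is an assumption about how to search the bundles directory, not an official procedure; only the /opt/bluedata/bundles root comes from the path above.

```shell
#!/bin/sh
# Sketch: locate startscript.sh without knowing the bundle folder name
# in advance. The search depth and glob are assumptions.
find_startscript() {
    # $1: bundles root directory
    find "$1" -maxdepth 2 -name startscript.sh 2>/dev/null | head -n 1
}

SCRIPT=$(find_startscript /opt/bluedata/bundles)
[ -n "$SCRIPT" ] && echo "Found: $SCRIPT" || echo "startscript.sh not found"
# Once found, run: "$SCRIPT" --erase --force
```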
- Monitor the erase process and address any reported problems. The log file is located at /tmp/worker_setup_<timestamp>.
  If HPE Ezmeral Runtime Enterprise was not installed using the agent, then the Controller deletes the Worker and Gateway hosts remotely. In the unlikely event that remote deletion does not succeed, you can manually uninstall HPE Ezmeral Runtime Enterprise on each host by executing the commands in Step 8. If remote deletion succeeds, skip to Step 9.
  NOTE: The --erase --force command uninstalls the HPE Ezmeral Runtime Enterprise installation. However, Python and Docker packages that were installed during HPE Ezmeral Runtime Enterprise installation will not be uninstalled.
- For an agent-based installation, log in to the host and then execute the following commands:
  - Worker:
    /opt/bluedata/bundles/<hpecp_install_folder>/<common-hpecp.bin> -ef --onworker --node-type worker --worker <worker-ip>
  - Gateway:
    /opt/bluedata/bundles/<hpecp_install_folder>/<common-hpecp.bin> -ef --onworker --node-type proxy --gateway-node-ip <worker-ip>
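If you have several Worker hosts, the per-host commands above can be generated in a loop. This is a sketch only: the Worker and Gateway IPs are example values, and the bundle path placeholders must be filled in for your deployment before the printed commands are run (for example, over ssh).

```shell
#!/bin/sh
# Sketch: build the per-host uninstall command lines shown above.
# The IPs are examples (assumptions); the placeholders in BUNDLE must
# be replaced with the real folder and binary names.
BUNDLE="/opt/bluedata/bundles/<hpecp_install_folder>/<common-hpecp.bin>"

worker_cmd() {
    # $1: Worker IP
    echo "$BUNDLE -ef --onworker --node-type worker --worker $1"
}

gateway_cmd() {
    # $1: Gateway IP
    echo "$BUNDLE -ef --onworker --node-type proxy --gateway-node-ip $1"
}

# Print the command to run on each host:
for ip in 10.0.0.11 10.0.0.12; do
    worker_cmd "$ip"
done
gateway_cmd 10.0.0.20
```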
- Reboot all of the hosts in the platform.
- Execute the following commands to verify that HPE Ezmeral Runtime Enterprise has been successfully deleted:
  - bdconfig -sysinfo: The system should return the message "command not found".
  - rpm -qa | grep hpe-cp: The system should return an empty response.
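The two checks above can be wrapped in a small function that prints PASS or FAIL instead of relying on reading raw command output. This is a sketch of the same checks, not part of the product.

```shell
#!/bin/sh
# Sketch: run both verification checks above and report PASS/FAIL.
verify_removed() {
    if command -v bdconfig >/dev/null 2>&1; then
        echo "FAIL: bdconfig is still installed"
    else
        echo "PASS: bdconfig: command not found"
    fi
    if rpm -qa 2>/dev/null | grep -q hpe-cp; then
        echo "FAIL: hpe-cp packages remain"
    else
        echo "PASS: no hpe-cp packages found"
    fi
}

verify_removed
```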
- Proceed as follows:
- If this host will not be reused as a Worker, then you have completed the uninstallation process.
- If you plan to reuse this host as a Worker, then proceed to the next step.
- Verify that the VolBDSCStore thin pool volume has been deleted. If not, then you will need to delete the volume before proceeding. The following example shows that VolBDSCStore still exists on the disk partition /dev/sdc:

  lsblk
  NAME                            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
  sda                               8:0    0 465.7G  0 disk
  ├─sda1                            8:1    0   500M  0 part /boot
  └─sda2                            8:2    0 465.2G  0 part
    ├─rootvg-lv_root              253:0    0   200G  0 lvm  /
    ├─rootvg-lv_swap              253:1    0    54G  0 lvm  [SWAP]
    └─rootvg-lv_var_log_bluedata  253:4    0   100G  0 lvm  /var/log/bluedata
  sdb                               8:16   0   3.7T  0 disk
  ├─bluedatavg-lv_opt_bluedata    253:5    0   300G  0 lvm  /opt/bluedata
  ├─bluedatavg-lv_srv             253:6    0   300G  0 lvm  /srv
  └─bluedatavg-lv_wb              253:7    0   500G  0 lvm  /wb
  sdc                               8:32   0   3.7T  0 disk
  └─sdc1                            8:33   0   3.7T  0 part
    ├─VolBDSCStore-thinpool_tmeta 253:2    0  15.8G  0 lvm
    │ └─VolBDSCStore-thinpool     253:8    0    36T  0 lvm
    └─VolBDSCStore-thinpool_tdata 253:3    0    36T  0 lvm
      └─VolBDSCStore-thinpool     253:8    0    36T  0 lvm
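Rather than scanning the lsblk tree by eye, you can grep its output for the volume group name. The helper below is a sketch that takes the lsblk output as an argument so it can be exercised against sample text.

```shell
#!/bin/sh
# Sketch: report whether VolBDSCStore still appears in lsblk output.
check_volbds() {
    # $1: lsblk output to scan
    if printf '%s\n' "$1" | grep -q 'VolBDSCStore'; then
        echo "VolBDSCStore still present; delete it before reinstalling"
    else
        echo "VolBDSCStore not found"
    fi
}

check_volbds "$(lsblk 2>/dev/null)"
```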
- Delete the volume group VolBDSCStore by executing the following command:
  sudo vgremove VolBDSCStore
- Delete all physical volumes being used for the volume group VolBDSCStore by executing the following command:
  sudo pvremove $(pvs | grep VolBDSCStore | awk '{print $1}')
- If the above steps do not delete the volume, then consider using a "brute force" method, such as wipefs, as follows:
  sudo wipefs -a -f /dev/sdc
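The three removal steps above can be combined into one guarded sketch. Nothing here is an official tool: the /dev/sdc value comes from the example output, the vgs existence check is an assumption about flow, and the default dry-run mode only prints the commands so you can review them before running destructive operations.

```shell
#!/bin/sh
# Sketch: remove VolBDSCStore via LVM tools when its metadata exists,
# otherwise fall back to wiping signatures. DRY_RUN=1 (the default
# here) only prints the commands instead of executing them.
DISK="${DISK:-/dev/sdc}"        # from the example above (assumption)
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

if vgs VolBDSCStore >/dev/null 2>&1; then
    run sudo vgremove VolBDSCStore
    run sudo pvremove $(pvs 2>/dev/null | grep VolBDSCStore | awk '{print $1}')
else
    # LVM metadata already gone, but on-disk signatures may remain.
    run sudo wipefs -a -f "$DISK"
fi
```

Set DRY_RUN=0 only after confirming the printed commands target the correct disk.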
- Ensure that /var/lib/docker is empty. If not, then delete everything below /var/lib/docker.
- Verify that /etc/sysconfig/docker-storage has DOCKER_STORAGE_OPTIONS equal to nothing (for example, DOCKER_STORAGE_OPTIONS=).
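The final two checks can also be scripted. The function below is a sketch; it takes the directory and file as parameters (defaulting to the paths above) so it can be pointed at test locations.

```shell
#!/bin/sh
# Sketch: verify the Docker leftovers described above are gone.
check_docker_leftovers() {
    docker_dir="${1:-/var/lib/docker}"
    storage_file="${2:-/etc/sysconfig/docker-storage}"

    # /var/lib/docker must be empty (or absent).
    if [ -d "$docker_dir" ] && [ -n "$(ls -A "$docker_dir" 2>/dev/null)" ]; then
        echo "NOT CLEAN: $docker_dir is not empty"
        return 1
    fi
    # DOCKER_STORAGE_OPTIONS must have no value after "=".
    if [ -f "$storage_file" ] && \
       grep -q '^DOCKER_STORAGE_OPTIONS=..*' "$storage_file"; then
        echo "NOT CLEAN: DOCKER_STORAGE_OPTIONS is not empty"
        return 1
    fi
    echo "CLEAN"
}

# Demonstrate against an empty scratch directory (assumption for demo):
tmpd=$(mktemp -d)
check_docker_leftovers "$tmpd" "$tmpd/docker-storage"
```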