Offline and Manual Upgrade Procedure
The offline, manual upgrade procedure is suitable for upgrading small clusters. On large clusters, these steps are commonly performed on all nodes in parallel using scripts or remote management tools.
This procedure assumes that you have planned and prepared for the upgrade as described earlier. This procedure also assumes that the cluster meets prerequisites, including the correct JDK for the core version to which you are upgrading. For more information, see the JDK Support Matrix.
Perform the steps in this procedure as the root user or with sudo. At the end of this procedure, you use yum update or zypper update on RHEL or SLES to upgrade the packages. Ignore any warnings that certain packages are not installed. Packages will be upgraded correctly, and no additional packages will be installed.
This procedure assumes that the cluster being upgraded is running release 6.1.x, 6.2.0, 7.0.0, 7.1.0, 7.2.0, 7.3.0, 7.4.0, 7.5.0, 7.6.x, 7.7.0, or 7.8.0. The procedure also assumes that the cluster to be upgraded is secure. Non-secure clusters must be secured before they can be upgraded. See Securing the Cluster Before Upgrading.
-
Notify stakeholders of the impending upgrade, and stop accepting new jobs and applications. Terminate running jobs and applications by running maprcli commands on the appropriate nodes in the cluster. For YARN applications, use the following commands:
# yarn application -list
# yarn application -kill <ApplicationId>
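On a busy cluster, killing applications one at a time is tedious. The following sketch extracts application IDs from `yarn application -list` output and kills each one. The helper names are illustrative, and the ID pattern assumes the standard application_&lt;clusterTimestamp&gt;_&lt;sequence&gt; format:

```shell
# Extract YARN application IDs from `yarn application -list` output (stdin).
# IDs follow the standard application_<clusterTimestamp>_<sequence> format.
list_app_ids() {
  grep -o 'application_[0-9]*_[0-9]*'
}

# Kill every listed application (sketch; run on a node with the yarn CLI).
kill_all_yarn_apps() {
  yarn application -list 2>/dev/null | list_app_ids | while read -r app_id; do
    echo "Killing ${app_id}"
    yarn application -kill "${app_id}"
  done
}
```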
-
Disconnect NFS mounts. Unmount NFS for the HPE Ezmeral Data Fabric share from all clients connected to it, including other nodes in the cluster. This allows all processes accessing the cluster via NFS to disconnect gracefully.
For example, if the cluster is mounted at /mapr, use this command:
# umount /mapr
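Before unmounting, you can confirm that the share is actually mounted on a given client. A minimal sketch, assuming the conventional /mapr mount point; is_mounted is an illustrative helper that parses the standard "device on mountpoint type ..." lines that mount prints:

```shell
# Succeed if the given mount point appears in the mount table read from stdin.
# Expects the usual "device on /path type fstype (options)" lines from `mount`.
is_mounted() {
  awk -v mp="$1" '$3 == mp { found = 1 } END { exit !found }'
}

# Usage (illustrative):
#   if mount | is_mounted /mapr; then umount /mapr; fi
```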
-
Display the services on each node in the cluster, and stop ecosystem component services on the nodes.
# maprcli node list -columns hostname,csvc
# maprcli node services -multi '[{ "name": "hue", "action": "stop"}, { "name": "oozie", "action": "stop"}, { "name": "hs2", "action": "stop"}]' -nodes <hostnames>
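Because the set of installed ecosystem services varies between clusters, the JSON argument to -multi can be generated from a plain list of service names rather than written by hand. A sketch; build_stop_spec is a hypothetical helper:

```shell
# Build the JSON array for `maprcli node services -multi` from service names,
# producing one {"name": ..., "action": "stop"} entry per service.
build_stop_spec() {
  local sep='' out='['
  for svc in "$@"; do
    out="${out}${sep}{ \"name\": \"${svc}\", \"action\": \"stop\"}"
    sep=', '
  done
  printf '%s]' "$out"
}

# Usage (illustrative):
#   maprcli node services -multi "$(build_stop_spec hue oozie hs2)" -nodes <hostnames>
```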
-
If a POSIX client service is running, stop the service:
- For the mapr-loopbacknfs service:
service mapr-loopbacknfs stop
- For the FUSE-based POSIX basic service:
service mapr-posix-client-basic stop
- For the FUSE-based POSIX platinum service:
service mapr-posix-client-platinum stop
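Normally only one of the three POSIX client services is installed on a given node. This sketch prints the stop command for whichever of the three appears in a supplied list of installed services; it is a dry-run helper with an illustrative name:

```shell
# Print the stop command for each known POSIX client service found among the
# given installed-service names (dry run; pipe the output to sh to execute).
stop_posix_clients() {
  for svc in "$@"; do
    case "$svc" in
      mapr-loopbacknfs|mapr-posix-client-basic|mapr-posix-client-platinum)
        echo "service ${svc} stop" ;;
    esac
  done
}
```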
-
Determine where the CLDB and ZooKeeper services are installed:
maprcli node listcldbs -cluster my.cluster.com -json
maprcli node listzookeepers -cluster my.cluster.com -json
-
Stop Warden on the CLDB nodes first, and then on all remaining nodes:
sudo service mapr-warden stop
-
Stop ZooKeeper on all nodes where it is installed:
sudo service mapr-zookeeper stop
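The stop order in the two preceding steps matters: Warden on the CLDB nodes first, then Warden on the remaining nodes, then ZooKeeper. The following sketch emits the commands in that order so they can be reviewed before being fanned out with ssh or a tool such as clush; the node lists and helper name are placeholders:

```shell
# Emit the service-stop commands in the required order: Warden on CLDB nodes,
# Warden on the remaining nodes, then ZooKeeper on the ZooKeeper nodes.
emit_stop_commands() {
  local cldb_nodes=$1 other_nodes=$2 zk_nodes=$3
  for node in $cldb_nodes $other_nodes; do
    echo "ssh ${node} sudo service mapr-warden stop"
  done
  for node in $zk_nodes; do
    echo "ssh ${node} sudo service mapr-zookeeper stop"
  done
}

# Usage (illustrative): emit_stop_commands "node1" "node2 node3" "node1"
```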
-
Ensure that no stale cluster processes are running. If any are, stop them:
ps -ef | grep mapr
pkill -u mapr
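Because pkill -u mapr unconditionally signals every process owned by the cluster user, it is worth checking for survivors first. A sketch; the function name is illustrative, and mapr is assumed to be the service account:

```shell
# List and kill any processes still owned by the given cluster user.
# Does nothing (and succeeds) when the user has no running processes.
kill_stale_procs() {
  if pgrep -u "$1" >/dev/null 2>&1; then
    ps -fu "$1"
    pkill -u "$1"
  fi
}

# Usage (illustrative): kill_stale_procs mapr
```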
-
Remove any existing patches:
-
Run one of the following commands to determine if a patch is installed.
- RHEL and SLES:
rpm -qa mapr-patch
- Ubuntu:
dpkg -l | grep mapr-patch
-
If one or more patches are installed, run one of the following commands to remove the patches:
- RHEL or SLES:
sudo rpm -e mapr-patch
- Ubuntu:
sudo apt-get -y remove mapr-patch
-
-
Install the package key that is needed to upgrade the core packages:
- RHEL:
sudo rpm --import https://package.ezmeral.hpe.com/releases/pub/maprgpg.key
- SLES: No package key needed.
- Ubuntu:
wget -O - https://package.ezmeral.hpe.com/releases/pub/maprgpg.key | sudo apt-key add -
-
Use the following command to view the Java alternatives menu, and set Java to JDK 11 or JDK 17:
sudo update-alternatives --config java
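After switching alternatives, confirm that the active java really is JDK 11 or 17. A sketch; java_major is an illustrative helper that parses the version "..." line of `java -version` output:

```shell
# Print the major version number parsed from `java -version` output (stdin).
java_major() {
  sed -n 's/.*version "\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# Usage (illustrative):
#   case "$(java -version 2>&1 | java_major)" in
#     11|17) echo "JDK OK" ;;
#     *)     echo "unexpected JDK" >&2 ;;
#   esac
```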
-
Upgrade these core component and Hadoop common packages on all nodes where the packages exist. The components to upgrade are:
- mapr-cldb
- mapr-client
- mapr-core
- mapr-core-internal
- mapr-fileserver
- mapr-gateway
- mapr-hadoop-client
- mapr-hadoop-core
- mapr-hadoop-util
- mapr-historyserver
- mapr-keycloak (for upgrades from release 7.5.0 or later)
- mapr-nfs
- mapr-nodemanager
- mapr-resourcemanager
- mapr-webserver
- mapr-zookeeper
- mapr-zk-internal
When you run yum update or zypper update, do not use a wildcard such as mapr-* to upgrade all data-fabric packages. A wildcard could erroneously include Hadoop ecosystem components, such as mapr-hive and mapr-pig.
- RHEL:
yum update mapr-cldb mapr-core mapr-core-internal mapr-gateway mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-webserver mapr-zookeeper mapr-zk-internal mapr-client mapr-hadoop-client mapr-hadoop-util
- SLES:
zypper update --allow-vendor-change mapr-cldb mapr-compat-suse mapr-core mapr-core-internal mapr-gateway mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-mapreduce2 mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-webserver mapr-zookeeper mapr-zk-internal mapr-client mapr-hadoop-client mapr-hadoop-util
- Ubuntu: First get a list of the data-fabric packages installed on the node, and then run apt-get install on the listed packages:
# dpkg --list | grep "mapr" | grep -P "^ii" | awk '{ print $2 }' | tr "\n" " "
# apt-get install <package-list>
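The Ubuntu listing step can be folded into a small helper that turns dpkg output into the package argument list. A sketch; mapr_packages is an illustrative name, and the filter keeps only installed (ii-state) mapr-* packages:

```shell
# Read `dpkg --list` output on stdin and print the installed mapr-* package
# names, space-separated, ready to pass to apt-get install.
mapr_packages() {
  grep '^ii' | awk '{ print $2 }' | grep '^mapr' | tr '\n' ' '
}

# Usage (illustrative):
#   sudo apt-get install $(dpkg --list | mapr_packages)
```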
-
-
Verify that the packages were installed successfully on all nodes. Confirm that there were no errors during installation, and check that /opt/mapr/MapRBuildVersion contains the expected value. For example:
# cat /opt/mapr/MapRBuildVersion
7.9.0.0.2024xxxxxxxxxx.GA
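When verifying many nodes, comparing each node's reported build string against the expected release prefix catches partial upgrades. A sketch; check_build_version is an illustrative helper, since the full build string varies by build date:

```shell
# Succeed if the reported build version starts with the expected release
# prefix; otherwise report the mismatch and fail.
check_build_version() {
  case "$1" in
    "$2"*) return 0 ;;
    *) echo "unexpected version: $1" >&2; return 1 ;;
  esac
}

# Usage (illustrative):
#   check_build_version "$(cat /opt/mapr/MapRBuildVersion)" 7.9.0
```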