Offline and Manual Upgrade Procedure

The offline, manual upgrade procedure is suitable for upgrading small clusters. On large clusters, these steps are commonly performed on all nodes in parallel using scripts or remote management tools.
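The parallel, scripted approach mentioned above can be sketched as a small shell helper. This is only a sketch: NODES and the RUN indirection are hypothetical placeholders (not part of the product), and dedicated tools such as clush or pdsh provide the same fan-out with better error handling.

```shell
# Sketch only: fan one command out to every node in parallel.
# NODES and RUN are hypothetical placeholders; RUN defaults to ssh.
RUN="${RUN:-ssh}"

run_on_all() {
    # Launch the command on each node in the background, then wait for all.
    for n in $NODES; do
        "$RUN" "$n" "$@" &
    done
    wait
}

# Example (commented out; assumes passwordless ssh as an admin user):
# NODES="node1 node2 node3" run_on_all sudo service mapr-warden stop
```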

This procedure assumes that you have planned and prepared for the upgrade as described earlier. This procedure also assumes that the cluster meets prerequisites, including the correct JDK for the core version to which you are upgrading. For more information, see the JDK Support Matrix.

NOTE
An offline upgrade is performed as the root user or with sudo.

At the end of this procedure, you use yum update (on RHEL), zypper update (on SLES), or apt-get install (on Ubuntu) to upgrade the packages. Ignore any warnings that certain packages are not installed. Packages will be upgraded correctly, and no additional packages will be installed.

This procedure assumes that the cluster being upgraded is running release 6.1.x, 6.2.0, 7.0.0, 7.1.0, 7.2.0, 7.3.0, 7.4.0, 7.5.0, 7.6.x, 7.7.0, or 7.8.0:

  1. Notify stakeholders of the impending upgrade, and stop accepting new jobs and applications. Terminate running jobs and applications by running the appropriate commands on nodes in the cluster.

    For YARN applications, use the following commands:
    # yarn application -list
    # yarn application -kill <ApplicationId>
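On a busy cluster, killing applications one ID at a time is tedious. A sketch that extracts the IDs and kills them in a loop follows; the helper name and the assumed output format (IDs in the first column, one application per line starting with "application_") are assumptions about typical yarn CLI output, not part of this procedure.

```shell
# Hypothetical helper: pull application IDs out of 'yarn application -list'
# output. Assumes IDs appear in the first column of lines that begin with
# "application_".
extract_app_ids() {
    awk '/^application_/ { print $1 }'
}

# Example (commented out; run where the yarn CLI is configured):
# yarn application -list -appStates RUNNING | extract_app_ids | \
#     while read -r app; do yarn application -kill "$app"; done
```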
  2. Disconnect NFS mounts. Unmount NFS for the HPE Ezmeral Data Fabric share from all clients connected to it, including other nodes in the cluster. This allows all processes accessing the cluster via NFS to disconnect gracefully.

    For example, if the cluster is mounted at /mapr, use this command:
    # umount /mapr
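A busy mount can make a plain umount fail. The following sketch, an assumption rather than a documented procedure, unmounts only when the path is actually a mount point and falls back to a lazy unmount:

```shell
# Sketch: unmount a data-fabric share only if it is actually mounted,
# falling back to a lazy unmount (-l) when the mount point is busy.
unmount_share() {
    mnt="$1"
    mountpoint -q "$mnt" || return 0    # nothing mounted here; done
    umount "$mnt" 2>/dev/null || umount -l "$mnt"
}

# unmount_share /mapr
```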
  3. Display the services on each node in the cluster, and stop ecosystem component services on the nodes.

    # maprcli node list -columns hostname,csvc
    # maprcli node services -multi '[{ "name": "hue", "action": "stop"}, { "name": "oozie", "action": "stop"}, { "name": "hs2", "action": "stop"}]' -nodes <hostnames>
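The JSON argument to -multi grows awkward as the service list grows. A sketch that builds it from a plain list follows; the helper name is hypothetical, and the service names are just the examples from this step:

```shell
# Sketch: build the JSON array for 'maprcli node services -multi' from a
# list of ecosystem services to stop.
build_stop_json() {
    json="["
    for s in "$@"; do
        json="$json{ \"name\": \"$s\", \"action\": \"stop\"},"
    done
    printf '%s]' "${json%,}"    # strip the trailing comma, close the array
}

# maprcli node services -multi "$(build_stop_json hue oozie hs2)" -nodes <hostnames>
```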
  4. If a POSIX client service is running, stop the service:

    • For the mapr-loopbacknfs service:
      service mapr-loopbacknfs stop
    • For the FUSE-based POSIX basic service:
      service mapr-posix-client-basic stop
    • For the FUSE-based POSIX platinum service:
      service mapr-posix-client-platinum stop
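Only one of these services is typically installed per node. The following sketch stops whichever is present; the SERVICE_CMD indirection is a hypothetical addition (defaulting to the real service command) so the loop can be exercised without root:

```shell
# Sketch: stop whichever POSIX client service exists on this node.
# SERVICE_CMD is a hypothetical indirection; it defaults to 'service'.
SERVICE_CMD="${SERVICE_CMD:-service}"

stop_posix_clients() {
    for svc in mapr-loopbacknfs mapr-posix-client-basic mapr-posix-client-platinum; do
        # Stop only services that report a running status.
        if "$SERVICE_CMD" "$svc" status >/dev/null 2>&1; then
            "$SERVICE_CMD" "$svc" stop
        fi
    done
}

# stop_posix_clients
```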
  5. Determine where the CLDB and ZooKeeper services are installed:
    maprcli node listcldbs -cluster my.cluster.com -json
    maprcli node listzookeepers -cluster my.cluster.com -json
  6. Stop Warden on the CLDB nodes first, and then on all remaining nodes:
    sudo service mapr-warden stop
  7. Stop ZooKeeper on all nodes where it is installed:
    sudo service mapr-zookeeper stop
  8. Check for stale cluster processes, and stop any that remain:

    ps -ef | grep mapr
    pkill -u mapr
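To kill only when something is actually left over, you can count the cluster user's processes first. A sketch, assuming 'mapr' is the cluster user (adjust if yours differs) and using a hypothetical helper name:

```shell
# Sketch: count processes still owned by a given user.
count_procs() {
    pgrep -u "$1" 2>/dev/null | wc -l | tr -d ' '
}

# Kill stale processes only if any remain:
# if [ "$(count_procs mapr)" -gt 0 ]; then pkill -u mapr; fi
```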
  9. Remove any existing patches:
    1. Run one of the following commands to determine if a patch is installed.

      • RHEL and SLES: rpm -qa mapr-patch
      • Ubuntu: dpkg -l | grep mapr-patch
      If the command displays no output, no patch is installed.
    2. If one or more patches are installed, run one of the following commands to remove the patches:

      • RHEL or SLES: sudo rpm -e mapr-patch
      • Ubuntu: sudo apt-get -y remove mapr-patch
  10. Prepare to upgrade core packages by installing the appropriate package key.

    • RHEL: sudo rpm --import https://package.ezmeral.hpe.com/releases/pub/maprgpg.key
    • SLES: No package key needed.
    • Ubuntu: wget -O - https://package.ezmeral.hpe.com/releases/pub/maprgpg.key | sudo apt-key add -
  11. Use the following command to view the Java alternatives menu, and set Java to JDK 11 or JDK 17:
    sudo update-alternatives --config java
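After switching alternatives, it is worth confirming that the active JDK is actually 11 or 17. A sketch with a hypothetical helper; the quoted version-string format is an assumption about typical OpenJDK 'java -version' output:

```shell
# Hypothetical helper: extract the major JDK version from 'java -version'
# output (e.g. 'openjdk version "11.0.22" ...' yields 11).
jdk_major() {
    printf '%s\n' "$1" | awk -F'"' '/version/ { print $2 }' | cut -d. -f1
}

# Example (commented out):
# jdk_major "$(java -version 2>&1)"    # expect 11 or 17
```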
  12. Upgrade the core component and Hadoop common packages on all nodes where they are installed.

    Components to upgrade are:
    • mapr-cldb
    • mapr-client
    • mapr-core
    • mapr-core-internal
    • mapr-fileserver
    • mapr-gateway
    • mapr-hadoop-client
    • mapr-hadoop-core
    • mapr-hadoop-util
    • mapr-historyserver
    • mapr-keycloak (for upgrades from release 7.5.0 or later)
    • mapr-nfs
    • mapr-nodemanager
    • mapr-resourcemanager
    • mapr-webserver
    • mapr-zookeeper
    • mapr-zk-internal
    When using yum update or zypper update, do not use a wildcard such as mapr-* to upgrade all data-fabric packages. Such a wildcard could erroneously upgrade Hadoop ecosystem components, such as mapr-hive and mapr-pig, along with the core packages.
    • RHEL:
      yum update mapr-cldb mapr-core mapr-core-internal mapr-gateway mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-webserver mapr-zookeeper mapr-zk-internal mapr-client mapr-hadoop-client mapr-hadoop-util
    • SLES:
      zypper update --allow-vendor-change mapr-cldb mapr-compat-suse mapr-core mapr-core-internal mapr-gateway mapr-fileserver mapr-hadoop-core mapr-historyserver mapr-mapreduce2 mapr-nfs mapr-nodemanager mapr-resourcemanager mapr-webserver mapr-zookeeper mapr-zk-internal mapr-client mapr-hadoop-client mapr-hadoop-util
    • Ubuntu: First get a list of the data-fabric packages installed on the node, and then run apt-get install on the listed packages.
      # dpkg --list | grep "mapr" | grep -P "^ii"| awk '{ print $2}'|tr "\n" " "
      # apt-get install <package-list>
  13. Verify that packages were installed successfully on all nodes. Confirm that there were no errors during installation, and check that /opt/mapr/MapRBuildVersion contains the expected value.

    For example:
    # cat /opt/mapr/MapRBuildVersion
    7.9.0.0.2024xxxxxxxxxx.GA
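Scripting this check across nodes amounts to a prefix comparison on the build string. A sketch with a hypothetical helper; the "7.9.0" prefix is only an example target release:

```shell
# Sketch: verify that a build-version string starts with the expected release.
matches_release() {
    case "$1" in
        "$2"*) return 0 ;;    # version begins with the expected prefix
        *)     return 1 ;;
    esac
}

# Example (commented out; run on an upgraded node):
# matches_release "$(cat /opt/mapr/MapRBuildVersion)" 7.9.0 || \
#     echo "unexpected build version" >&2
```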
See Post-Upgrade Steps for Core.