Upgrading the CSI Plug-In

Use this procedure to upgrade the CSI plug-in on Kubernetes clusters that are not HPE Ezmeral Data Fabric on Kubernetes clusters in deployments of HPE Ezmeral Runtime Enterprise 5.3.5 or later. Upgrading the CSI plug-in requires restarting or recreating the affected pods.

About this task

For deployments of HPE Ezmeral Runtime Enterprise 5.3.5 or later, this procedure describes how to upgrade the CSI plug-in on Kubernetes clusters that are not HPE Ezmeral Data Fabric on Kubernetes clusters. The upgrade requires restarting or recreating pods in the cluster that use a persistent volume claim (PVC).

If this deployment of HPE Ezmeral Runtime Enterprise includes an HPE Ezmeral Data Fabric on Kubernetes cluster, the CSI plug-in for that cluster is managed as part of the HPE Ezmeral Data Fabric on Kubernetes deployment, and this procedure does not apply.

If pods on this cluster use a persistent volume claim (PVC) provisioned through HPE CSI driver 1.0.x or 1.1.x, upgrade the HPE CSI driver to version 1.2.7-1.0.7 before you upgrade from Kubernetes 1.18.x.
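
To identify the HPE CSI driver version that is currently installed, one option is to inspect the container images of the CSI driver pods. The following generic kubectl query is an illustration only; it assumes cluster-admin access and that the CSI driver pod or image names contain "csi":

  kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep -i csi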

For HPE Ezmeral Runtime Enterprise 5.5.0 and 5.5.1, see workaround EZCP-3738 in Issues and Workarounds.

Procedure

  1. Instruct users not to use pods in this cluster until after the upgrade.
  2. Using SSH, log in to the Kubernetes Master node.
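    For example, where <install_user> and <k8s-master-IP> are placeholders for your environment:
    ssh <install_user>@<k8s-master-IP>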
  3. Execute the following command to create a directory:
    mkdir /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade
  4. Copy the hpe-csi-upgrade directory from /opt/hpe/kubernetes/tools/hpe-csi-upgrade on the Controller node to the directory that you created on the Master node, using the following command:
    scp -r <install_user>@<controller-IP>:/opt/hpe/kubernetes/tools/hpe-csi-upgrade/* /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade/
  5. Update the permissions and ownership of the directory, using the following commands:
    chmod -R 755 /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade
    chown -R <install_user>:<install_group> /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade
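    To confirm that the upgrade scripts are in place and executable, you can optionally list the directory contents:
    ls -l /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade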
  6. Stop or delete the pods that use a persistent volume claim (PVC) provisioned by the HPE CSI driver, using the following steps:
    1. Execute the following commands to list the pods that need to be stopped or deleted:
      cd /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade
      ./find_pods.sh

      Sample output:

      Output 1:
      [user1@host1 ~]$ ./find_pods.sh
      Namespace: ns1, Pod(s): pod-1-1 pod-1-2
      Namespace: ns2, Pod(s): pod-2-1 pod-2-2

      Or

      Output 2:
      [user1@host1 ~]$ ./find_pods.sh
      None of pods mount the PVC/PV provisioned by DF CSI driver
    2. Do one of the following:
      • If the pod you identified in the previous step was launched directly as a Pod object, back up the pod, and then delete it.
        Back up the pod, using the following command:
        kubectl -n <namespace> get pods <podname> -o yaml > backup_<unique_name>.yaml
        Delete the pod, using the following command:
        kubectl -n <namespace> delete pods <podname>
      • If the pod was launched by a DaemonSet, set non-existing=true in the nodeSelector, using the following command:
        kubectl -n <namespace> patch daemonset <name-of-daemon-set> -p '{"spec": {"template": {"spec": {"nodeSelector": {"non-existing": "true"}}}}}'
      • If the pod was launched by a StatefulSet, Deployment, or ReplicaSet object, set replicas to 0, using the following command:
        kubectl -n <namespace> scale <object-type>/<object-name> --replicas=0
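    After all affected pods are stopped or deleted, you can optionally re-run the find_pods.sh script to confirm that no remaining pods mount a PVC provisioned by the HPE CSI driver (see Output 2 above for the expected result):
      ./find_pods.sh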
  7. Change directories to the hpe-csi-upgrade directory:
    cd /opt/bluedata/common-install/scripts/tools/hpe-csi-upgrade
  8. Execute the hpe-csi-upgrade script, specifying the new CSI versions as follows:
    ./hpe-csi-upgrade.sh -u <CSI-FUSE-version>-<loopback-NFS-version>

    For example:

    ./hpe-csi-upgrade.sh -u 1.2.7-1.0.7
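
    After the script completes, you can optionally confirm that the CSI driver pods return to a Running state; this generic query is a suggestion only, because the namespace that hosts the driver pods varies by deployment:

    kubectl get pods --all-namespaces | grep -i csi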
  9. Restart or recreate all pods that you stopped or deleted in Step 6, using the steps that follow:
    • If it is a Pod object, recreate the pod from the backup, using the following command:
      kubectl apply -f backup_<unique_name>.yaml
    • If it is a DaemonSet object, remove the non-existing nodeSelector, using the following command:
      kubectl -n <namespace> patch daemonset <name-of-daemon-set> --type json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector/non-existing"}]'
    • If it is a StatefulSet, Deployment, or ReplicaSet object, set replicas back to a non-zero value X, using the following command:
      kubectl -n <namespace> scale <object-type>/<object-name> --replicas=X
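    After the pods are recreated, you can optionally verify that they are running and that their persistent volume claims are still bound; for example:
      kubectl -n <namespace> get pods
      kubectl -n <namespace> get pvc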