Shutting Down a Data Fabric Cluster

This procedure performs an orderly shutdown of Data Fabric clusters that implement HPE Ezmeral Data Fabric on Kubernetes. It does not apply to bare-metal HPE Ezmeral Data Fabric clusters, and it does not shut down the entire HPE Ezmeral Runtime Enterprise.

Prerequisites

  • Run the edf check and edf report commands to verify that the cluster is fully functional. Resolve all reported issues before shutting down the cluster.

  • Ensure that all tenant services that read from or write to this Data Fabric cluster, and all tenant applications, such as Spark, are stopped.

  • Ensure that no Data Fabric operations, including file replication or mirroring operations, are in progress.

You must have access to the admin CLI pod (default name: admincli-0).
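The prerequisite checks can be scripted from a workstation that has kubectl access to the cluster. The helper below is a minimal sketch, assuming the default admin CLI pod name (admincli-0) and a placeholder namespace; it only prints each command (a dry run) so you can review it before running it for real — drop the leading echo to execute.

```shell
#!/bin/sh
# Minimal sketch, assuming the default admin CLI pod name admincli-0.
# Builds the kubectl command that runs an edf subcommand inside the pod.
# Dry run: the command is printed, not executed; remove echo to execute.
edf_cmd() {
  ns="$1"; shift                        # first argument: namespace
  echo kubectl exec admincli-0 -n "$ns" -- edf "$@"
}

# Pre-shutdown health checks ("dataplatform" is a placeholder namespace).
edf_cmd dataplatform check
edf_cmd dataplatform report
```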

About this task

When you run the edf shutdown cluster command, pods are shut down and rebooted, but immediately after the reboot they are placed in a wait state, which prevents the Data Fabric cluster from becoming operational. When you are ready to resume operations, use the edf startup resume command to start the Data Fabric cluster.
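The shutdown-and-resume cycle described above amounts to two edf commands run inside the admin CLI pod. The sketch below assumes the default pod name admincli-0 and a placeholder namespace; the leading echo makes each line a dry run — remove it to execute the commands.

```shell
#!/bin/sh
# Sketch of the shutdown/resume cycle, assuming the default admin CLI pod
# admincli-0. Dry run: commands are printed, not executed.
NS=dataplatform    # assumption: replace with your Data Fabric namespace

# 1. Shut down the cluster; pods reboot into a wait state.
echo kubectl exec admincli-0 -n "$NS" -- edf shutdown cluster

# 2. After maintenance, release the pods from the wait state.
echo kubectl exec admincli-0 -n "$NS" -- edf startup resume
```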

Procedure

  1. Access the admin CLI pod.
    For example, where <namespace> is the namespace in which the Data Fabric cluster is deployed:
    kubectl exec -it admincli-0 -n <namespace> -- /bin/bash
  2. Execute the edf shutdown cluster command.
    For example:
    edf shutdown cluster

    Data Fabric cluster pods, such as MFS and CLDB, are shut down and rebooted, and then they are put into a wait state.

  3. After you complete the upgrade or maintenance task, resume operations on the pods by entering the edf startup resume command.