About HPE Ezmeral Data Fabric on Kubernetes

A typical Kubernetes environment may have pods frequently coming and going. Large Kubernetes environments, such as in a public cloud, may handle pools of systems where new hosts are added to support pod and cluster placement. In HPE Ezmeral Runtime Enterprise, a Data Fabric cluster is a Kubernetes Custom Resource that functions as a storage cluster that provides access to PVCs, tenant storage, shares, and other storage needs.

HPE Ezmeral Data Fabric on Kubernetes is not supported in HPE Ezmeral Runtime Enterprise Essentials.

In a Data Fabric cluster:

  • The hosts (called nodes) commit considerable disk resources that may include NVMe and enterprise-class SSDs.
  • The Data Fabric cluster may only need to come up on a few nodes.
  • Pods are unlikely to be deleted frequently.
  • The Data Fabric CR must account for host resource profiles to guarantee core pod availability.

HPE Ezmeral Runtime Enterprise includes native support for HPE Ezmeral Data Fabric. This avoids many manual steps and allows you to create Data Fabric clusters in a manner similar to that used for creating Compute Kubernetes clusters (see Creating a New Data Fabric Cluster and Creating a New Kubernetes Cluster). Each Data Fabric cluster resides on nodes. See Kubernetes Worker Installation Overview and Kubernetes Data Fabric Node Installation Overview.


HPE Ezmeral Runtime Enterprise automates the following functionality for HPE Ezmeral Data Fabric on Kubernetes clusters:

  • Pre-checking nodes before tagging them for use with HPE Ezmeral Data Fabric on Kubernetes clusters.
  • Checking for sufficient resources to bring up core and service pods when creating a HPE Ezmeral Data Fabric on Kubernetes cluster.
  • Bootstrapping software installation, namespace creation, and other functions.
  • Automatic Data Fabric CR creation based on scanning node system information and resource profiles. This CR helps determine how many CLDB, ZK, and MFS pods can be created and ensure proportional resource requests relative to node resources or grouped disk profiles. HPE Ezmeral Runtime Enterprise updates the standard "template" Data Fabric CR at cluster creation time. Users may view/download the Data Fabric CR after cluster creation.
  • Auto-registration of Tenant Storage/PVCs, along with clean-up functionality to allow deregistration if needed for another Data Fabric cluster.
  • Data Fabric clusters automatically become the default StorageClass for Compute Kubernetes clusters.
  • Gateway hosts (see Gateway Hosts) expose HPE Ezmeral Data Fabric services such as the HPE Ezmeral Data Fabric Control System, Kibana, and Grafana via clickable links in the web interface.
  • User-settable configuration parameters allow fine-tuning a cluster to suit specific needs. See User-Configurable Data Fabric Cluster Parameters.
  • Data Fabric clusters can be expanded by adding additional nodes, as described in Expanding a Data Fabric Cluster. The original cluster size and the number and composition of new nodes determine whether CLDB, ZK, and/or MFS pods will be added. Once expanded, a Data Fabric cluster cannot be shrunk.
  • HPE Ezmeral Data Fabric packages can be started automatically when creating a Kubernetes cluster in HPE Ezmeral Runtime Enterprise. The user can also select Compute packages to install by clicking the available options during cluster creation.
  • The POSIX client type (“Basic” or “Platinum”) can be specified on a per-node basis.
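Because the generated Data Fabric CR is ordinary Kubernetes YAML, its general shape can be sketched as follows. This fragment is purely illustrative: the apiVersion, kind, field names, and pod counts shown here are assumptions for the sake of example, not the exact schema that HPE Ezmeral Runtime Enterprise generates.

```yaml
# Hypothetical fragment of a Data Fabric CR. Field names and values
# are illustrative assumptions, not the generated schema; the actual
# CR is produced automatically at cluster creation time.
apiVersion: hpe.com/v1
kind: DataPlatform
metadata:
  name: datafabric-cluster
spec:
  core:
    cldb:
      count: 3              # CLDB pods, sized from node resource profiles
      resources:
        requests:
          cpu: "4"
          memory: 16Gi
    zookeeper:
      count: 3              # ZK quorum
  services:
    mfs:
      count: 5              # MFS pods, derived from grouped disk profiles
```

In practice you would not author this file by hand; HPE Ezmeral Runtime Enterprise builds it from scanned node information, and you can view or download the result after cluster creation.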

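Because the Data Fabric cluster becomes the default StorageClass for Compute Kubernetes clusters, a tenant workload can request persistent storage with a standard PersistentVolumeClaim and no storage-class-specific configuration. A minimal sketch (the claim name is arbitrary):

```yaml
# Minimal PVC in a Compute Kubernetes cluster. With the Data Fabric
# cluster registered as the default StorageClass, storageClassName is
# omitted and the claim is provisioned on Data Fabric automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tenant-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```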

The following limitations apply to HPE Ezmeral Data Fabric on Kubernetes clusters:

  • Only one HPE Ezmeral Data Fabric on Kubernetes cluster can be created. This one HPE Ezmeral Data Fabric on Kubernetes cluster therefore registers the Tenant Storage and Share for all Kubernetes tenants.
  • Migrating from an integrated/embedded form of HPE Ezmeral Data Fabric (versions 5.1.1 and below) to an HPE Ezmeral Data Fabric on Kubernetes cluster (versions 5.2 and above) requires manual steps. Contact HPE Technical Support for assistance.