Requirements for HPE Ezmeral Data Fabric on Kubernetes — Footprint-Optimized Configurations

Describes available footprint-optimized configurations of HPE Ezmeral Data Fabric on Kubernetes and the requirements for deploying a footprint-optimized configuration in non-production environments.

Footprint-Optimized Configurations

Footprint-optimized configurations implement HPE Ezmeral Data Fabric on Kubernetes on a smaller set of nodes and support a subset of the features (services) of the high-performance production configuration. Footprint-optimized configurations are intended for non-production environments such as for development, testing, and proof-of-concept demonstrations.

Footprint-optimized configurations are not supported for use in production environments.

There are two supported footprint-optimized configurations of HPE Ezmeral Data Fabric on Kubernetes:

Combined Masters-Workers Configuration

The smallest configuration has three nodes, each of which performs Kubernetes master functions and Data Fabric storage functions. Optionally, you can add Data Fabric worker nodes or compute nodes to this configuration. To run compute jobs, at least one compute node is required.

For information about the requirements for this configuration, see Combined Masters-Workers Configuration.

Dedicated Control Plane Configuration

This configuration is similar to the high-performance production configuration. This configuration has three dedicated control plane (master) Kubernetes nodes, and a minimum of three Data Fabric worker nodes. Optionally, you can add Data Fabric worker nodes or compute nodes to this configuration. To run compute jobs, at least one compute node is required.

For information about the requirements for this configuration, see Dedicated Control Plane Configuration.

Combined Masters-Workers Configuration

This is the smallest supported configuration: a total of three nodes, each of which performs Kubernetes master functions and Data Fabric storage functions. Optionally, you can add Data Fabric worker nodes or compute nodes to this configuration. To run compute jobs, at least one compute node is required.

This configuration is not supported for use in production environments, even if you add worker nodes. Migration of this configuration to the high-performance production configuration is not supported.

Table 1. Combined Masters-Workers Configuration - Minimum Requirements for Non-production Environments
Configuration: 3 Masters/Workers

Masters/Workers (all nodes tagged Datafabric=yes)
  • Recommended minimum CPU cores: 32 per node (96 cores total)
  • Recommended minimum RAM: 64 GB per node (192 GB total)

One Compute Node (required if running compute jobs on the cluster; no Datafabric tag)
  • Recommended minimum CPU cores: 32 per node (32 cores total)
  • Recommended minimum RAM: 64 GB per node (64 GB total)
See Requirements for HPE Ezmeral Data Fabric on Kubernetes — Recommended Configuration for information about the following:
  • Ephemeral Storage Requirements
  • Persistent Storage Requirements
  • Other Requirements, such as CSI drivers
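The Datafabric=yes tag distinguishes nodes that run Data Fabric storage services from untagged compute nodes. As a minimal sketch, assuming the tag is applied as a standard Kubernetes node label (node names here are placeholders; consult your deployment tooling for the method it actually uses):

```shell
# Sketch: tag the three combined master/worker nodes so that Data Fabric
# storage services are scheduled on them. Assumes the Datafabric tag maps
# to an ordinary Kubernetes node label; node names are placeholders.
for node in node1 node2 node3; do
  kubectl label node "$node" Datafabric=yes --overwrite
done

# The compute node is left untagged (no Datafabric label applied).

# Verify which nodes carry the tag:
kubectl get nodes -L Datafabric
```

If your environment manages node tags through an installer or UI instead of kubectl, use that mechanism; the end state is the same label on the storage nodes.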

Dedicated Control Plane Configuration

This configuration is similar to the high-performance production configuration. This configuration has three dedicated control plane (master) Kubernetes nodes, and a minimum of three Data Fabric worker nodes. Optionally, you can add Data Fabric worker nodes or compute nodes to this configuration. To run compute jobs, at least one compute node is required.

With enough additional worker nodes, for a total of five or more worker nodes, this configuration can be converted into the high-performance production configuration. Changes to the CR are required. Contact your Hewlett Packard Enterprise representative.

Table 2. Dedicated Control Plane Configuration - Minimum Requirements for Non-Production Environments
Configuration: 3 Masters + 3 Workers

Masters (no Datafabric tag; requirements are the same as the general requirements for Kubernetes master hosts)
  • Recommended minimum CPU cores: 4 per node (12 cores total)
  • Recommended minimum RAM: 32 GB per node (96 GB total)

Workers (all nodes tagged Datafabric=yes)
  • Recommended minimum CPU cores: 32 per node (96 cores total)
  • Recommended minimum RAM: 64 GB per node (192 GB total)

One Compute Node (required if running compute jobs on the cluster; no Datafabric tag)
  • Recommended minimum CPU cores: 32 per node (32 cores total)
  • Recommended minimum RAM: 64 GB per node (64 GB total)
See Requirements for HPE Ezmeral Data Fabric on Kubernetes — Recommended Configuration for information about the following:
  • Ephemeral Storage Requirements
  • Persistent Storage Requirements
  • Other Requirements, such as CSI drivers

Limitations of Footprint-Optimized Configurations

  • No Hive Metastore
  • No Monitoring or Metrics Capabilities - Monitoring capabilities are not available because Grafana and OpenTSDB are not installed. However, Metrics and Monitoring pods can be added to the cluster if enough resources are available.

Footprint-Optimized Configuration CR

Footprint-optimized configurations use a different CR than the high-performance production configurations. The CR used for footprint-optimized configurations configures pods differently and omits the monitoring and metrics services. For a sample CR, see the following:

https://github.com/HPEEzmeral/df-on-k8s/blob/main/examples/p1.5.0/3node/3node_core_objectstore_gateway.yaml
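As an illustrative sketch, the sample CR above could be fetched and applied with kubectl. The raw-file URL is derived from the GitHub link, and the resource kind queried at the end is a hypothetical placeholder; review and edit the CR for your cluster before applying it, and check your installed CRDs for the actual kind:

```shell
# Sketch: download the sample footprint-optimized CR and apply it.
# Review and edit the file for your environment before applying.
curl -LO https://raw.githubusercontent.com/HPEEzmeral/df-on-k8s/main/examples/p1.5.0/3node/3node_core_objectstore_gateway.yaml
kubectl apply -f 3node_core_objectstore_gateway.yaml

# Inspect the resulting custom resources and pods.
kubectl get crds                 # find the actual Data Fabric resource kind
kubectl get pods -A              # watch the Data Fabric pods come up
```

The apply step assumes the operator that handles this CR is already installed on the cluster; without it, the CR is stored but nothing is deployed.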