Installing and Configuring Spark Operator
This section describes how to install and configure Spark Operator on HPE Ezmeral Runtime Enterprise.
Prerequisites
- Log in as a Kubernetes Cluster Administrator or Platform Administrator in HPE Ezmeral Runtime Enterprise.
About this task
In HPE Ezmeral Runtime Enterprise, you can install Spark Operator using the GUI or manually using the Helm chart.
To learn which Spark versions are supported by Spark Operator, see Interoperability Matrix for Spark.
Installing Spark Operator Using the GUI
About this task
Install Spark Operator during the Kubernetes Cluster creation step using the HPE Ezmeral Runtime Enterprise GUI. See Creating a New Kubernetes Cluster.
Procedure
- Set up Host Configurations, Cluster Configurations, and Authentication for the Kubernetes cluster.
- In Application Configurations, select Enable Spark Operator.
- Click Next and review the summary of resources to be assigned to the Kubernetes cluster.
- To create the Kubernetes cluster, click Submit. Creating the cluster also triggers the Spark Operator installation.
Results
The GUI installs the Spark Operator.
Starting with HPE Ezmeral Runtime Enterprise 5.4.0, selecting the Enable Spark Operator option does not trigger the installation of Livy, Spark History Server, Spark Thrift Server, or Hive Metastore. You must install those components separately, using the GUI or manually using the Helm charts.
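To confirm that the operator is running, you can check its pod from the command line. This is a minimal check, assuming kubectl access to the cluster and that the chart follows the upstream spark-on-k8s-operator conventions; the exact pod and CRD names depend on your release:
# List the operator pod in the cluster namespace
kubectl get pods -n <cluster-namespace> | grep spark-operator
# The operator registers CRDs such as sparkapplications.sparkoperator.k8s.io
kubectl get crd | grep sparkoperator.k8s.io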
Installing Spark Operator Using the Helm Chart
Prerequisites
- Install and configure Helm 3.
- Install and configure kubectl.
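You can confirm that both tools are available before proceeding; for example:
# Print the Helm client version (must be Helm 3)
helm version --short
# Print the kubectl client version
kubectl version --client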
About this task
Install the Spark Operator by using the Helm chart.
Procedure
helm install -f <path-to-values.yaml-file> <spark-operator-name> ./<path-to-spark-operator-chart>/ \
--namespace <cluster-namespace> \
--set sparkJobNamespace=<cluster-namespace> \
--set webhook.namespaceSelector=hpe.com/tenant=<cluster-namespace> \
--set fullnameOverride=<spark-operator-name> \
--set autotix.enable=true
Running the helm install command installs Spark Operator in the cluster namespace.
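You can verify the release from the command line; for example, using the placeholder names from the command above:
# Show the status of the Spark Operator release
helm status <spark-operator-name> --namespace <cluster-namespace>
# List all releases in the cluster namespace
helm list --namespace <cluster-namespace>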
Example
helm install -f spark-operator-chart/values.yaml spark-operator-compute ./spark-operator-chart/ \
--namespace compute \
--set sparkJobNamespace=compute \
--set webhook.namespaceSelector=hpe.com/tenant=compute \
--set fullnameOverride=spark-operator-compute
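After the operator is running, you can submit a test application to confirm that it reconciles SparkApplication resources. The following is a minimal sketch adapted from the upstream spark-on-k8s-operator examples; the image, Spark version, jar path, and service account are assumptions and must match your environment:
kubectl apply -n compute -f - <<EOF
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi-test
spec:
  type: Scala
  mode: cluster
  image: <spark-image>   # assumption: a Spark image available in your registry
  mainClass: org.apache.spark.examples.SparkPi
  # assumption: examples jar path inside the image; adjust to your Spark build
  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar
  sparkVersion: "3.1.1"  # assumption: match the Spark version in your image
  restartPolicy:
    type: Never
  driver:
    cores: 1
    memory: 512m
    serviceAccount: <spark-service-account>  # assumption: a service account with Spark RBAC
  executor:
    cores: 1
    instances: 1
    memory: 512m
EOF
You can then watch the application progress with kubectl get sparkapplications -n compute.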