Prerequisites for Installing the Container Storage Interface (CSI) Storage Plugin

Lists the prerequisites for installing and using the Container Storage Interface (CSI) Storage Plugin.

Hardware and Software Requirements

To install and use the Container Storage Interface (CSI) Storage Plugin, you must have the following components and supported versions:

HPE Ezmeral Data Fabric File Store: 6.1.0 or later. For additional version compatibility information, see CSI Version Compatibility.

Ecosystem Pack (EEP): Any EEP supported by Data Fabric 6.1.0 or later. See EEP Support and Lifecycle Status.

Kubernetes Software: 1.17 and later*

OS (Kubernetes nodes): All nodes in the Kubernetes cluster must use the same Linux OS. Configuration files are available to support:
  • CentOS
  • RHEL (use the CentOS configuration file)
  • Ubuntu

NOTE
Docker for Mac with Kubernetes is not supported as a development platform for containers that use Data Fabric for Kubernetes.

CSI Driver: FUSE and Loopback NFS drivers (implementing version 1.9.0 of the CSI spec). The download location shows the latest version of the driver.

Sidecar Containers: The CSI plugin pod uses:
  • csi-node-driver-registrar — v2.10.1
  • livenessprobe — v2.12.0
The CSI provisioner pod uses:
  • csi-attacher — v4.5.1
  • csi-provisioner — v4.0.1
  • csi-snapshotter — v7.0.2
  • snapshot-controller — v7.0.2
  • livenessprobe — v2.12.0
  • csi-resizer — v1.10.1

POSIX License: The Basic POSIX client package is included by default when you install Data Fabric for Kubernetes. The Platinum POSIX client can be enabled by specifying a parameter in the pod specification.

To enable the Platinum POSIX client package, see Enabling the Platinum Posix Client for Kubernetes Interfaces for Data Fabric FlexVolume Driver. For a comparison of the Basic and Platinum POSIX client packages, see Preparing for Installation (HPE Ezmeral Data Fabric POSIX Client).
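As a purely illustrative sketch, the parameter might appear in the volume definition of the pod specification; the driver name and the platinum key below are assumptions, so confirm the exact syntax in the topics linked above:

# Hypothetical pod volume fragment; the driver name and "platinum" option are assumptions.
volumes:
  - name: maprfs-volume            # hypothetical volume name
    flexVolume:
      driver: "mapr.com/maprfs"    # assumed FlexVolume driver name
      options:
        platinum: "true"           # assumed option that selects the Platinum POSIX client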

*Kubernetes alpha features are not supported.

Before You Install

The installation procedure assumes that the Kubernetes cluster is already installed and functioning normally. In addition, before installing the Container Storage Interface (CSI) Storage Plugin:

  1. Ensure that all Kubernetes nodes use the same Linux distribution.

    For example, all nodes can be CentOS nodes, or all nodes can be Ubuntu nodes. A cluster with a mixture of CentOS and Ubuntu nodes is not supported.

  2. Configure your Kubernetes cluster to allow privileged pods by running the following commands:
    $ ./kube-apiserver ... --allow-privileged=true ...
    $ ./kubelet ... --allow-privileged=true ...
  3. Enable mount propagation to share volumes mounted by one container with other containers in the same pod and with other pods on the same node.

    For more information, see Mount Propagation.
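
    For example, a container can request bidirectional mount propagation on a volume mount so that mounts it creates become visible to other containers in the pod and to other pods on the node. The following is a minimal sketch using standard Kubernetes fields; the pod name, image, and host path are illustrative:
    # Illustrative pod fragment; Bidirectional propagation requires a privileged container.
    apiVersion: v1
    kind: Pod
    metadata:
      name: mount-propagation-example   # hypothetical name
    spec:
      containers:
      - name: app
        image: busybox                  # any image works; busybox is just an example
        command: ["sleep", "3600"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: shared-data
          mountPath: /data
          mountPropagation: Bidirectional
      volumes:
      - name: shared-data
        hostPath:
          path: /mnt/shared             # hypothetical host path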

  4. Apply CRDs to your Kubernetes cluster if they are not already present:
    Kubernetes 1.26 and Later
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v7.0.2/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v7.0.2/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v7.0.2/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    Kubernetes 1.20 and Later
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    Kubernetes 1.19 and Earlier
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    For more information, see Snapshot Controller.
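    To confirm that the snapshot CRDs are present, you can list them; this quick check is only an illustration, and the grep pattern simply filters the output:
    kubectl get crd | grep volumesnapshot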
  5. For OpenShift, install the SecurityContextConstraints by applying deploy/openshift/csi-scc.yaml in the mapr-csi GitHub repository:
    oc apply -f deploy/openshift/csi-scc.yaml
  6. Create the state volume-mount path, and update the CSI driver yaml. In prior releases, the state of dynamically provisioned volumes and their snapshots was held in memory, so the provisioner lost this state whenever the controller pod was restarted or upgraded. After a restart, the provisioner could not take snapshots, restore snapshots, or resize or clone previously created volumes.

    With the latest version of the CSI driver, the provisioner persists the encrypted state of the dynamically provisioned volumes and their snapshots in a volume on the data-fabric cluster. If the controller pod is restarted, the state is automatically recovered, and operations on previously created volumes work as intended.

    You can change the state volume-mount prefix by updating the --statevolmountprefix=/path/to/dir argument of the mapr-kdfprovisioner container image in the CSI driver yaml.

    NOTE
    The directory you specify must be readable and writable by all users who provision volumes on the Data Fabric cluster using CSI drivers:
    # Create state volume mount path
    hadoop fs -mkdir /apps/k8s
    hadoop fs -chmod 777 /apps/k8s
    
    # Update csi driver yaml
    --statevolmountprefix=/apps/k8s
  7. Understand the number of volume mounts per node that your application requires. By default, the CSI driver supports 20 volume mounts per node. You can modify this limit by adjusting the value of the maxvolumepernode parameter in the csi-maprkdf-<version>.yaml or csi-maprnfskdf-<version>.yaml file, as shown in the sketch below.
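
    The following is a minimal sketch of that change, assuming the parameter is passed as a --maxvolumepernode container argument in the node plugin section of the yaml (the surrounding structure is illustrative); locate the existing maxvolumepernode entry in your copy of the file and adjust its value:
    # In csi-maprkdf-<version>.yaml or csi-maprnfskdf-<version>.yaml
    args:
      - "--maxvolumepernode=50"   # default is 20; set to the limit your workloads need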