Example: Mounting a PersistentVolume for Dynamic Provisioning Using Container Storage Interface (CSI) Storage Plugin

About this task

This example also uses a PersistentVolume. However, unlike the previous example, when you use the dynamic provisioner, you do not need to create a PersistentVolume manually. The PersistentVolume is created automatically based on the parameters specified in the referenced StorageClass.

Dynamic provisioning is useful when you do not want to require Data Fabric and Kubernetes cluster administrators to manually create storage for persisting pod state.

The following example uses a PersistentVolumeClaim that references a StorageClass. Here, a Kubernetes administrator has created a storage class called test-secure-sc for pod creators to use when they want to create persistent storage for their pods. In this example, it is important for the created pod storage to survive the deletion of a pod.

The information on this page is valid for both FUSE POSIX and Loopback NFS plugins. Examples or tables that mention the FUSE POSIX provisioner (com.mapr.csi-kdf) are equally valid for the Loopback NFS provisioner (com.mapr.csi-nfskdf).

To dynamically provision a volume, you must do the following:

Procedure

  1. Generate a user ticket, and create and deploy a ticket secret on the pod. See Configuring a Secret for information about creating and deploying a ticket secret.
  2. Create the REST secret, and deploy the secret on the pod. See Configuring a Secret.
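    A minimal sketch of the two Secrets is shown below. The key names (MAPR_CLUSTER_USER, MAPR_CLUSTER_PASSWORD, CONTAINER_TICKET) and the values are illustrative assumptions; see Configuring a Secret for the exact format required by your release.

    ```yaml
    # Illustrative sketch only -- key names and values are assumptions.
    apiVersion: v1
    kind: Secret
    metadata:
      name: mapr-provisioner-secrets   # referenced by csiProvisionerSecretName
      namespace: test-csi
    type: Opaque
    data:
      MAPR_CLUSTER_USER: bWFwcg==      # base64-encoded Data Fabric admin user (example)
      MAPR_CLUSTER_PASSWORD: bWFwcg==  # base64-encoded password (example)
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: mapr-ticket-secret         # referenced by csiNodePublishSecretName
      namespace: test-csi
    type: Opaque
    stringData:
      CONTAINER_TICKET: "<contents of the generated user ticket>"
    ```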
  3. Create a StorageClass similar to the following:
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: test-secure-sc
      namespace: test-csi
    provisioner: com.mapr.csi-kdf
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    parameters:
        csiProvisionerSecretName: "mapr-provisioner-secrets"
        csiProvisionerSecretNamespace: "test-csi"
        csiNodePublishSecretName: "mapr-ticket-secret"
        csiNodePublishSecretNamespace: "test-csi"
        restServers: "10.10.10.210:8443"
        cldbHosts: "10.10.10.210:7222"
        cluster: "clusterA"
        securityType: "secure"
        namePrefix: "csi-pv"
        mountPrefix: "/csi"
        advisoryquota: "100M"
        trackMemory: "false"
        logLevel: "error"
        retainLogs: "false"
        startupConfig: "-o allow_other -o big_writes -o auto_unmount -o async_dio -o max_background=24 -o auto_inval_data --disable_writeback"
    
    For more information, see Storage Classes. The following table shows the properties defined in the sample StorageClass:
    Property Description
    apiVersion The Kubernetes API version for the StorageClass spec.
    kind The kind of object being created. This is a StorageClass.
    metadata: name The name of the StorageClass. Administrators should specify the name carefully because it will be used by pod authors to help select the right StorageClass for their needs.
    metadata: namespace The namespace specified for the StorageClass. This namespace can be different from the namespace used by the PVC and pod, because a StorageClass is a cluster-scoped resource that can be referenced across namespaces.
    provisioner The provisioner being used. For the FUSE POSIX provisioner, specify com.mapr.csi-kdf. For the Loopback NFS provisioner, specify com.mapr.csi-nfskdf.
    csiNodePublishSecretName The name of the Secret that contains the ticket to use when mounting to the HPE Ezmeral Data Fabric cluster. See Configuring a Secret.
    csiNodePublishSecretNamespace The namespace that contains the Secret. Use the same namespace as the namespace used by the pod.
    csiProvisionerSecretName (deprecated; use csi.storage.k8s.io/provisioner-secret-name) The name of the Kubernetes Secret that stores the Data Fabric administrative credentials (user, password, and ticket information for the Data Fabric webserver). To use the provisioner, you must configure a Secret. See Configuring a Secret.
    csiProvisionerSecretNamespace (deprecated; use csi.storage.k8s.io/provisioner-secret-namespace) The namespace for the Secret containing the Data Fabric administrative credentials (user name and password information for a Data Fabric user that has the privileges to create volumes). This namespace can be different from the namespace used by the pod, since a pod author or namespace admin might not be trusted to create administration Secrets for the Data Fabric cluster.
    restServers A space-separated list of Data Fabric webservers. Specify the hostname or IP address and port number of each REST server for the cluster. For fault tolerance, providing multiple REST server hosts is recommended.
    cldbHosts The hostname or IP addresses of the CLDB hosts for the Data Fabric cluster. You must provide at least one CLDB host. For fault-tolerance, providing multiple CLDB hosts is recommended. To specify multiple hosts, separate each name or IP address by a space.
    cluster The Data Fabric cluster name.
    securityType A parameter that indicates whether Data Fabric tickets are used or not used. If Data Fabric tickets are used, specify secure. Otherwise, specify unsecure.
    namePrefix A prefix for the Data Fabric volume to be created. For example, if you specify PV as the namePrefix, the first dynamically created volume might be named PV.bevefsescr. The provisioner generates random names using lower-case letters. If you do not specify a prefix, the provisioner uses maprprovisioner as a prefix.
    mountPrefix The parent path of the mount in the Data Fabric file system. If you do not specify a mount prefix, the provisioner mounts your volume under the Data Fabric root.
    NOTE
    The user provisioning a volume under this mountPrefix must have read-write permissions on the mount path; otherwise, volume provisioning fails.
    advisoryquota The advisory storage quota for the volume. The advisoryquota is one of the Data Fabric parameters that you can specify for dynamic provisioning. For more information, see Before You Begin.
    trackMemory Enables memory profiling to debug memory leaks in the FUSE or Loopback NFS process. Enable this option only when directed to do so by the Data Fabric support team. The default value is false.
    logLevel Sets the log level to one of the following values: error, warn, info, or debug. For the FUSE POSIX driver (com.mapr.csi-kdf), the default value is error. For the Loopback NFS driver (com.mapr.csi-nfskdf), the default value is info.
    retainLogs Retains the logs for the pod on the host machine. The default value is false.
    startupConfig (FUSE POSIX) Release 1.0.2 and later support specifying the startupConfig line. The startupConfig line allows you to specify FUSE configuration parameters that are passed to the fuse.conf file. For the parameters that can be passed, see Configuring the HPE Ezmeral Data Fabric FUSE-Based POSIX Client.
    If no startupConfig line is specified, these default startup settings are used:
    "-o allow_other -o big_writes -o auto_unmount"

    The default settings allow other users to access the mount point, enable writes larger than 4 KB, and automatically unmount the file system when the process is terminated.

    The following example includes the three default settings and adds some additional settings:
    startupConfig: "-o allow_other -o big_writes -o auto_unmount -o async_dio 
    -o max_background=24 -o auto_inval_data --disable_writeback"
    
    The additional settings enable asynchronous direct I/O, set the maximum number of asynchronous requests to 24, automatically invalidate the kernel FUSE cache for any data change that causes a change in the files, and disable the writeback cache.
    startupConfig (Loopback NFS) The startupConfig line allows you to specify configuration parameters that are passed to the nfsserver.conf file; all parameters supported in nfsserver.conf can be specified, separated by spaces. For the parameters that can be passed, see nfsserver.conf. If no startupConfig line is specified, these default startup settings are used:
    startupConfig: "NFS_HEAPSIZE=1024 DrCacheSize=1024000"
    numrpcthreads Sets the number of RPC threads for the Data Fabric client. The default value is 1. The maximum value is 4. Use this option to increase throughput with FUSE basic or container licenses or with the Loopback NFS driver.
    startDelay Sets the wait time, in seconds, after launching the FUSE or Loopback NFS processes and before making the mount available to the application pod. The default value is 5.
    CAUTION
    Setting a value below 2 or 3 seconds can affect the availability of the mount.
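    The namePrefix behavior described in the table can be illustrated with a short sketch. The generation scheme below (a dot followed by random lowercase letters) only approximates the documented pattern; it is not the provisioner's actual implementation:

    ```python
    import random
    import string

    def provisioned_volume_name(name_prefix: str = "maprprovisioner", suffix_len: int = 10) -> str:
        """Approximate the documented pattern: <namePrefix>.<random lowercase letters>."""
        suffix = "".join(random.choice(string.ascii_lowercase) for _ in range(suffix_len))
        return f"{name_prefix}.{suffix}"

    # With namePrefix "csi-pv", a generated volume name looks like "csi-pv.bevefsescr"
    name = provisioned_volume_name("csi-pv")
    ```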
  4. Configure a PersistentVolumeClaim similar to the following:
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-secure-pvc
      namespace: test-csi
    spec:
      storageClassName: test-secure-sc
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    The following table shows the properties defined in the sample PersistentVolumeClaim:
    Property Description
    apiVersion The Kubernetes API version for the PersistentVolumeClaim spec.
    kind The kind of object being created. This is a PersistentVolumeClaim (PVC).
    metadata: name The PVC name.
    metadata: namespace The namespace in which the PVC runs. This should be the same namespace used by the pod.
    storageClassName The name of the storage class requested by the PersistentVolumeClaim. For more information, see Dynamic Provisioning and Storage Classes.
    accessModes How the PersistentVolume is mounted on the host. For more information, see Access Modes.
    requests: storage The storage resources being requested, or that were requested and have been allocated. The pod author can use this parameter to specify how much quota is needed for the Data Fabric volume. For the units, see Resource Model.
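    The sample PVC requests storage: 5G. Kubernetes quantity suffixes are case-sensitive: G is a decimal suffix (10^9 bytes), while Gi is binary (2^30 bytes). A small sketch of the difference:

    ```python
    # Kubernetes resource quantities: decimal suffixes (k, M, G, T) are powers of 10;
    # binary suffixes (Ki, Mi, Gi, Ti) are powers of 2.
    SUFFIXES = {
        "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
        "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40,
    }

    def quantity_to_bytes(quantity: str) -> int:
        """Convert a simple quantity string such as '5G' or '2Gi' to bytes."""
        # Check two-character (binary) suffixes before one-character (decimal) ones.
        for suffix in sorted(SUFFIXES, key=len, reverse=True):
            if quantity.endswith(suffix):
                return int(quantity[: -len(suffix)]) * SUFFIXES[suffix]
        return int(quantity)  # no suffix: the value is already in bytes

    print(quantity_to_bytes("5G"))   # 5000000000
    print(quantity_to_bytes("5Gi"))  # 5368709120
    ```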
  5. Create the pod spec similar to the following:
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-secure-pod
      namespace: test-csi
    spec:
      containers:
      - name: busybox
        image: busybox
        args:
        - sleep
        - "1000000"
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        volumeMounts:
        - mountPath: /mapr
          name: maprflex
      volumes:
        - name: maprflex
          persistentVolumeClaim:
            claimName: test-secure-pvc
    The following table shows the properties defined in the sample pod spec:
    Property Description
    apiVersion The Kubernetes API version for the pod spec.
    kind The kind of object being created. For clarity, this example uses a naked Pod. Generally, it is better to use a Deployment, DaemonSet, or StatefulSet for high availability and ease of upgrade.
    metadata: name The pod name.
    metadata: namespace The namespace in which the pod runs. It should be the same namespace in which the PVC runs.
    volumeMounts: mountPath A directory inside the container that is designated as the mount path.
    volumeMounts: name A name that you assign to the Kubernetes volumeMounts resource. The value must match volumes: name.
    volumes: name The name of the Kubernetes volumes resource. The value must match volumeMounts: name.
    persistentVolumeClaim: claimName The name of the PersistentVolumeClaim (PVC). For more information, see PersistentVolumeClaims.
  6. Deploy the .yaml file on the pod by running the following command:
    kubectl apply -f <filename>.yaml
    For each pod mount request, the POSIX client starts with the pod's hostname and a newly generated hostid, which is tracked on the Data Fabric cluster. You can run the node list command on the cluster to determine the number of POSIX clients. For example:
    FUSE POSIX
    # maprcli node list -clientsonly true
    clienttype        clienthealth  hostname                              ip                                       lasthb  id
    posixclientbasic  Inactive      4f3d34fe-2007-11e9-8980-0cc47ab39644  10.10.102.94,172.17.0.1,192.168.28.0     11225   7407394893618656436
    posixclientbasic  Inactive      7906d011-200f-11e9-84c0-0cc47ab39644  10.10.102.94,172.17.0.1,192.168.28.0     8174    7544602061076655421
    posixclientbasic  Inactive      9ed61912-2004-11e9-8980-0cc47ab39644  10.10.102.92,172.17.0.1,192.168.184.128  11224   2540810767207593086
    posixclientbasic  Inactive      c35ab639-2010-11e9-84c0-0cc47ab39644  10.10.102.94,172.17.0.1,192.168.28.0     7568    7947067275504513691
    posixclientbasic  Active        e5dc10e8-2012-11e9-84c0-0cc47ab39644  10.10.102.94,172.17.0.1,192.168.28.0     18      5849529086453778130
    Loopback NFS
    # maprcli node list -clientsonly true
    clienttype    clienthealth  hostname                                  ip                              lasthb  id
    LOOPBACK_NFS  Active        3ae5bb79-0aa1-431d-a17b-2cf0ef692060      10.163.160.104,192.168.252.65   1       3740102597316282880
    LOOPBACK_NFS  Active        8c096a3c-0424-466a-8eda-6a61999ac3e4      10.163.160.103,192.168.19.192   1       6892565781040807680
    LOOPBACK_NFS  Active        ae92fe4b-a3c9-4cb3-8858-c688dd6e0bdc      10.163.160.103,192.168.19.192   1       1038944668644089888
    LOOPBACK_NFS  Active        fe855a47-bf66-4b72-8f28-c713b5ec4004      10.163.160.105,192.168.153.128  1       5958455784535826944
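After deploying, you can confirm that dynamic provisioning succeeded with commands similar to the following. This is a sketch using the object names from this procedure and requires access to the Kubernetes cluster:

```shell
# The PVC should reach STATUS Bound once the provisioner creates the volume
kubectl get pvc test-secure-pvc -n test-csi
# The dynamically created PersistentVolume appears in the cluster-wide PV list
kubectl get pv
# The pod should be Running, with the volume mounted at /mapr
kubectl get pod test-secure-pod -n test-csi
kubectl exec -n test-csi test-secure-pod -- df -h /mapr
```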

Example

Full example, which includes the StorageClass, PVC, and Pod configuration
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test-secure-sc
  namespace: test-csi
provisioner: com.mapr.csi-kdf
parameters:
    csiProvisionerSecretName: "mapr-provisioner-secrets"
    csiProvisionerSecretNamespace: "test-csi"
    csiNodePublishSecretName: "mapr-ticket-secret"
    csiNodePublishSecretNamespace: "test-csi"
    restServers: "10.10.10.210"
    cldbHosts: "10.10.10.210"
    cluster: "clusterA"
    securityType: "secure"
    namePrefix: "csi-pv"
    mountPrefix: "/csi"
    advisoryquota: "100M"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-secure-pvc
  namespace: test-csi
spec:
  storageClassName: test-secure-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
---
apiVersion: v1
kind: Pod
metadata:
  name: test-secure-pod
  namespace: test-csi
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
    resources:
      requests:
        memory: "2Gi"
        cpu: "500m"
    volumeMounts:
    - mountPath: /mapr
      name: maprflex
  volumes:
    - name: maprflex
      persistentVolumeClaim:
        claimName: test-secure-pvc