Static and Dynamic Provisioning Using FlexVolume Driver
Describes static and dynamic storage provisioning using the FlexVolume driver on a Kubernetes cluster.
Kubernetes makes a distinction between static and dynamic provisioning of storage.
In static provisioning, a Data Fabric administrator first creates Data Fabric volumes (mount points) and then ensures that they are mounted. A Kubernetes administrator then exposes these mount points to Kubernetes through PersistentVolumes. In a typical static-provisioning scenario, a Pod author asks a Kubernetes administrator to create a PersistentVolume that references an existing Data Fabric mount point containing a dataset the Pod author needs. This PersistentVolume references the FlexVolume plug-in, which mounts and unmounts Data Fabric mount points for the requesting Pod.
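The following is a minimal sketch of such a PersistentVolume. The driver name (mapr.com/maprfs) and the option keys (volumePath, cluster, cldbHosts, securityType, ticketSecretName, ticketSecretNamespace) follow common KDF examples but should be treated as assumptions here; verify them against the option reference for your KDF release.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dataset-pv
spec:
  capacity:
    storage: 5Gi                          # advertised size; Data Fabric enforces quotas on its side
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # keep the Data Fabric volume when the claim is released
  flexVolume:
    driver: mapr.com/maprfs               # assumed KDF FlexVolume driver name
    options:
      volumePath: /apps/dataset           # existing Data Fabric mount point with the dataset
      cluster: my.cluster.com             # Data Fabric cluster name (illustrative)
      cldbHosts: cldb1.example.com cldb2.example.com
      securityType: secure
      ticketSecretName: mapr-ticket-secret      # Secret holding the Data Fabric ticket
      ticketSecretNamespace: default            # (see Static Provisioning Implementation below)
```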
In dynamic provisioning, a Kubernetes administrator creates a set of StorageClasses for Pods to invoke. Each StorageClass has a predefined set of storage characteristics, such as the Data Fabric volume advisory quota size and snapshot rules. The Pod author selects the predefined StorageClass that best matches the Pod's requirements. When the Pod references this StorageClass through a PersistentVolumeClaim, the StorageClass invokes the dynamic provisioner to allocate storage for the requesting Pod.
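A sketch of this pairing follows. The provisioner name and the parameter keys (restServers, maprSecretName, maprSecretNamespace, namePrefix, mountPrefix, advisoryquota) are assumptions modeled on typical KDF storage classes, not a definitive parameter list.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: datafabric-gold
provisioner: mapr.com/maprfs             # assumed KDF provisioner name
parameters:
  restServers: mcs.example.com:8443      # Data Fabric REST endpoint used to create volumes
  cldbHosts: cldb1.example.com cldb2.example.com
  cluster: my.cluster.com
  securityType: secure
  maprSecretName: mapr-provisioner-secret    # Secret with Data Fabric administrative credentials
  maprSecretNamespace: default               # (see Dynamic Provisioning Implementation below)
  namePrefix: pv                         # prefix for generated Data Fabric volume names
  mountPrefix: /pv                       # parent path for generated mount points
  advisoryquota: 100M                    # example Data Fabric storage characteristic
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dataset-pvc
spec:
  storageClassName: datafabric-gold      # selects the StorageClass above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```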
Static Provisioning Implementation
To accomplish static provisioning, the KDF FlexVolume plug-in is deployed to all nodes in the Kubernetes cluster through a Kubernetes DaemonSet. The volume plug-in uses the Basic or Platinum POSIX client to mount the Data Fabric file system. The information that the POSIX client uses to connect to Data Fabric is contained in a Kubernetes Volume or PersistentVolume. A Data Fabric ticket, stored in a Secret and referenced by the Kubernetes Volume or PersistentVolume specification, passes secure authentication data to the file system.
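A sketch of such a ticket Secret follows. The CONTAINER_TICKET key name is an assumption taken from typical KDF examples, and the value is a placeholder for a base64-encoded Data Fabric ticket.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mapr-ticket-secret
  namespace: default
type: Opaque
data:
  # Replace with the output of: base64 -w 0 /tmp/maprticket_<uid>
  CONTAINER_TICKET: BASE64_ENCODED_TICKET
```

The PersistentVolume shown earlier points at this Secret through its ticketSecretName and ticketSecretNamespace options, so the ticket never appears in the volume specification itself.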
Dynamic Provisioning Implementation
To accomplish dynamic provisioning, the KDF provisioner is deployed as a Kubernetes Deployment to a single node in the Kubernetes cluster. The provisioner requests the creation of Data Fabric volumes when a container is launched. You can scale your provisioner deployment to multiple nodes for high availability. If a provisioner Pod is deleted, a new provisioner is started on another worker node in the cluster.
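A minimal Deployment sketch is shown below. The namespace, labels, ServiceAccount, and especially the container image reference are hypothetical placeholders, not the published KDF manifests.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kdf-provisioner
  namespace: kube-system
spec:
  replicas: 1                            # increase for high availability
  selector:
    matchLabels:
      app: kdf-provisioner
  template:
    metadata:
      labels:
        app: kdf-provisioner
    spec:
      serviceAccountName: kdf-provisioner-sa   # bound to the RBAC rules shown below
      containers:
        - name: provisioner
          image: maprtech/kdf-provisioner:latest   # hypothetical image reference
```

Scaling for high availability is a standard Deployment operation, for example: kubectl scale deployment kdf-provisioner -n kube-system --replicas=2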
A Kubernetes administrator must configure at least one StorageClass with Data Fabric parameters (for example, mirroring, snapshot, and quota settings) for use during creation of the Data Fabric volume. The StorageClass passes Data Fabric administrative credentials to the provisioner through a Kubernetes Secret. Security for the provisioner is handled through role-based access control (RBAC) in Kubernetes.
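The sketch below shows both pieces. The MAPR_CLUSTER_USER and MAPR_CLUSTER_PASSWORD key names are assumptions, and the RBAC rules illustrate what a dynamic provisioner typically needs (manage PersistentVolumes, watch claims and StorageClasses, read Secrets, record events) rather than the exact published policy.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mapr-provisioner-secret
  namespace: default
type: Opaque
stringData:
  MAPR_CLUSTER_USER: mapr                # assumed key names for administrative credentials
  MAPR_CLUSTER_PASSWORD: changeme
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kdf-provisioner-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kdf-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]                       # read ticket and credential Secrets
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kdf-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: kdf-provisioner-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: kdf-provisioner-role
  apiGroup: rbac.authorization.k8s.io
```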