Adding a Disk
This procedure describes how to add a disk to an HPE Ezmeral Data Fabric on Kubernetes deployment running on HPE Ezmeral Runtime Enterprise.
Prerequisites

- You know the pod to which you are adding the disk.
- You know the location (physical or virtual) to which you are adding the disk.
- Required access rights:
  - Platform Administrator or Kubernetes Cluster Administrator access rights are required to download the admin kubeconfig file, which is needed to access Kubernetes cluster pods (see Downloading Admin Kubeconfig).
  - You must be logged in as the root user on the node that contains the disk and on which the Kubernetes cluster is running.
Procedure
- Add the disk to the storage system.
  For more information about this step, refer to the documentation for your server and storage system. For example, if you are adding storage to a physical server, add the physical disk drives to the disk enclosure.
- Determine the node on which the pod is running.
  In the following example, the disk is to be added to pod mfs-1:

  kubectl describe pod mfs-1 -n mydfcluster | grep Node:
  Node:         mydfnode1-default-pool/192.0.2.75
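If you want the node name by itself for reuse in later commands, the Node: line can be parsed in the shell. A minimal sketch, assuming the name/IP output format shown above; the hard-coded line stands in for the live kubectl output, which requires a running cluster:

```shell
# Parse the node name out of a "Node: <name>/<ip>" line.
# In practice the line comes from:
#   kubectl describe pod mfs-1 -n mydfcluster | grep Node:
line='Node:         mydfnode1-default-pool/192.0.2.75'
node=$(printf '%s\n' "$line" | awk '{print $2}' | cut -d/ -f1)
echo "$node"
```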
- Delete the pod to which you want to add the disk (the pod restarts automatically).
  For example:

  kubectl delete pod mfs-1 -n mydfcluster
- Access the pod to which you want to add the disk.
  For example:

  kubectl exec -it mfs-1 -n mydfcluster -- /bin/bash
- Get the current list of disks from the node annotations.
  For example:

  kubectl describe node mydfnode1-default-pool | grep ssdlist
  hpe.com/ssdlist: /dev/sdb,/dev/sdc
- Add the new disk to the ssdlist annotation.
  In the following example, the new disk is /dev/sdd:

  kubectl annotate --overwrite nodes mydfnode1-default-pool hpe.com/ssdlist='/dev/sdb,/dev/sdc,/dev/sdd'
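Because kubectl annotate --overwrite replaces the entire ssdlist value, the existing disks must be repeated along with the new one. A minimal sketch of building the updated value, using the example disks from this procedure; the kubectl call is commented out because it requires a live cluster:

```shell
# Append the new disk to the comma-separated ssdlist value,
# skipping the append if the disk is already listed.
current='/dev/sdb,/dev/sdc'   # in practice, read from the node annotation
new_disk='/dev/sdd'
case ",$current," in
  *",$new_disk,"*) updated="$current" ;;            # already present; leave unchanged
  *)               updated="$current,$new_disk" ;;  # append
esac
echo "$updated"
# kubectl annotate --overwrite nodes mydfnode1-default-pool "hpe.com/ssdlist=$updated"
```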
- Verify that the annotations include the disk you added in the previous step.
  For example:

  kubectl describe node mydfnode1-default-pool | grep ssdlist
  hpe.com/ssdlist: /dev/sdb,/dev/sdc,/dev/sdd
- Log in to mfs and verify that the added disk is included in the directory that contains the logical links to Data Fabric disks (/var/mapr/edf-disks/).
  For example:

  ls -l /var/mapr/edf-disks/
  ...
  lrwxrwxrwx 1 root root 8 Nov  4 15:28 drive_ssd_0 -> /dev/sdb
  lrwxrwxrwx 1 root root 8 Nov  4 15:28 drive_ssd_1 -> /dev/sdc
  lrwxrwxrwx 1 root root 8 Nov  4 16:59 drive_ssd_3 -> /dev/sdd
  ...

  In maprcli commands, you specify the disk using the internal name that the Data Fabric file system uses to refer to the disk. In the preceding example, the internal name for the /dev/sdd disk is drive_ssd_3.
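The mapping from internal names to devices is simply a directory of symbolic links, so it can be resolved with readlink. A minimal sketch that recreates one of the links under /tmp so the command can be tried without a live pod; on the pod itself, point readlink at /var/mapr/edf-disks/ instead:

```shell
# Resolve an internal Data Fabric disk name to its device node.
# /tmp/edf-disks stands in for /var/mapr/edf-disks/ on the mfs pod.
mkdir -p /tmp/edf-disks
ln -sf /dev/sdd /tmp/edf-disks/drive_ssd_3
readlink /tmp/edf-disks/drive_ssd_3
```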
- Add the new disk to the Data Fabric file system.
  Caution: This step reformats the disk. Any data on the disk will be lost.
  In HPE Ezmeral Data Fabric on Kubernetes deployments, the host parameter of maprcli commands refers to the pod. In the following example, the disk drive_ssd_3 is being added:

  maprcli disk add -disks /var/mapr/edf-disks/drive_ssd_3 -host mfs-1.mfs-svc.mydfcluster.svc.cluster.local
- Verify that the new disk is included in the Data Fabric configuration file.
  To display the configuration file, enter the following command:

  cat /opt/mapr/conf/disktab
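To check for the new disk without reading the whole file, you can grep for its internal name. A minimal sketch; the sample file stands in for /opt/mapr/conf/disktab, whose real format is managed by Data Fabric:

```shell
# The exit status of grep -q indicates whether the internal disk name is registered.
# /tmp/disktab.sample is illustrative only; on the pod, grep /opt/mapr/conf/disktab.
printf '%s\n' '/var/mapr/edf-disks/drive_ssd_3' > /tmp/disktab.sample
if grep -q 'drive_ssd_3' /tmp/disktab.sample; then
  echo registered
fi
```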
- Verify that the new disk exists in the Data Fabric file system.
  For example, verify that the system displays a result for the following command:

  maprcli disk listall | grep mfs-1 | grep sdd
- Verify that there is a new storage pool that includes the new disk.
  To display the list of storage pools, enter the following command:

  /opt/mapr/server/mrconfig sp list -v