Troubleshooting the Container Storage Interface (CSI) Storage Plugin
This section describes how to resolve common problems you might encounter when installing and using the Container Storage Interface (CSI) Storage Plugin.
Troubleshooting CSI Driver Installation
Run the following commands to display information about the pods that are deployed for the CSI plugin and provisioner:
FUSE POSIX:
kubectl get pods -n mapr-csi
Loopback NFS:
kubectl get pods -n mapr-nfscsi
The installation is considered successful if the get pods command shows the pods in the Running state. For example, your output should look similar to the following when the CSI plugin is deployed on three worker nodes:
FUSE POSIX:
mapr-csi csi-controller-kdf-0 5/5 Running 0 4h25m
mapr-csi csi-nodeplugin-kdf-2kfrf 3/3 Running 0 4h25m
mapr-csi csi-nodeplugin-kdf-lq5nw 3/3 Running 0 4h25m
mapr-csi csi-nodeplugin-kdf-pkrzt 3/3 Running 0 4h25m
Loopback NFS:
csi-controller-nfskdf-0 7/7 Running 0 22h
csi-nodeplugin-nfskdf-5rjt2 3/3 Running 0 18h
csi-nodeplugin-nfskdf-7d9cs 3/3 Running 0 22h
csi-nodeplugin-nfskdf-qw7kg 3/3 Running 0 22h
The preceding output shows the following:
- csi-nodeplugin-kdf-*: DaemonSet pods are deployed on all the Kubernetes worker nodes.
- csi-controller-kdf-0: A StatefulSet pod is deployed on a single Kubernetes worker node.
- csi-nodeplugin-nfskdf-*: DaemonSet pods are deployed on all the Kubernetes worker nodes.
- csi-controller-nfskdf-0: A StatefulSet pod is deployed on a single Kubernetes worker node.
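If a pod is missing or stuck, it can help to confirm the DaemonSet and StatefulSet behind these pods. The following is a minimal sketch using standard kubectl commands; for the Loopback NFS plugin, substitute the mapr-nfscsi namespace:
# Confirm the node plugin DaemonSet has the expected number of ready pods
kubectl get daemonset -n mapr-csi
# Confirm the controller StatefulSet reports 1/1 ready
kubectl get statefulset -n mapr-csi
# Show scheduling or image-pull problems for a pod that is not Running
kubectl describe pod csi-controller-kdf-0 -n mapr-csi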
Troubleshoot CSI Plugin Deployment Failures
If the pods show a failure in the deployment, run the following kubectl command to see the container logs:
kubectl logs <csi-nodeplugin-*> -n mapr-csi -c <nodeplugin-pod-container>
Here, replace <nodeplugin-pod-container> with the container that is failing. For the Loopback NFS plugin, use the mapr-nfscsi namespace instead. You can also run one of the following kubectl commands to see the controller logs:
FUSE POSIX:
kubectl logs csi-controller-kdf-0 -n mapr-csi -c <controller-pod-container>
Loopback NFS:
kubectl logs csi-controller-nfskdf-0 -n mapr-nfscsi -c <controller-pod-container>
Here, replace <controller-pod-container> with the container that is failing.
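If you are not sure which container in a pod is failing, you can first list the containers and their states. The following is a minimal sketch using standard kubectl options; the controller pod name is taken from the output above, and --previous is only useful if the container has already crashed at least once:
# List the container names in the controller pod
kubectl get pod csi-controller-kdf-0 -n mapr-csi -o jsonpath='{.spec.containers[*].name}'
# Show per-container state, restart counts, and recent events
kubectl describe pod csi-controller-kdf-0 -n mapr-csi
# Fetch logs from the previous instance of a crashed container
kubectl logs csi-controller-kdf-0 -n mapr-csi -c <controller-pod-container> --previous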
Troubleshooting Volume Provisioning
To check for provisioner errors, view the provisioner log:
FUSE POSIX:
tail -100f /var/log/csi-maprkdf/csi-provisioner.log
Loopback NFS:
tail -100f /var/log/csi-maprkdf/csi-nfsprovisioner.log
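Provisioning problems also show up as events on the PersistentVolumeClaim, so it is worth checking the claim alongside the provisioner log. A minimal sketch; <pvc-name> and <namespace> are placeholders for your own objects:
# Check whether the claim is Bound or still Pending
kubectl get pvc <pvc-name> -n <namespace>
# Show provisioning events, such as storage class or secret lookup failures
kubectl describe pvc <pvc-name> -n <namespace>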
Troubleshooting Mount Operation
Check the CSI Storage Plugin log for mount or unmount errors:
FUSE POSIX:
tail -100f /var/log/csi-maprkdf/csi-plugin.log
Loopback NFS:
tail -100f /var/log/csi-maprkdf/csi-nfsplugin.log
If you don’t see any errors, check the kubelet logs on the node where the pod is scheduled to run. For specific errors, check the CSI Storage Plugin logs.
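How you read the kubelet logs depends on how the kubelet is run on that node. On a systemd-managed node, a sketch such as the following is a reasonable starting point; the kubelet unit name and the grep pattern are assumptions about your environment:
# Follow the kubelet logs and watch for mount-related messages
journalctl -u kubelet -f | grep -i mount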
Troubleshooting CSI Storage Plugin Discovery with kubelet
Check the kubelet path used by your Kubernetes deployment by inspecting the --root-dir flag of the running kubelet process. The --root-dir flag is a string that contains the directory path the kubelet uses to manage its files (such as volume mounts) and defaults to /var/lib/kubelet. If the Kubernetes environment uses a different kubelet path, modify the CSI driver deployment .yaml file with the new path, and redeploy the CSI Storage Plugin.
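To confirm the path that the kubelet is actually using, inspect the running process on a worker node. A minimal sketch; the deployment file name in the last command is only a placeholder:
# Show the kubelet command line; look for the --root-dir flag
ps -ef | grep '[k]ubelet'
# If --root-dir is absent, the kubelet uses the default /var/lib/kubelet
# Check which kubelet path the CSI deployment file references before redeploying
grep -n '/var/lib/kubelet' <csi-deployment>.yaml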
Troubleshooting Snapshot Provisioning
To check for snapshot provisioning errors, view the provisioner log:
tail -100f /var/log/csi-maprkdf/csi-provisioner.log
If there are no errors, run the following kubectl command to check the snapshot:
kubectl describe volumesnapshot.snapshot.storage.k8s.io <snapshot-name> -n <namespace-name>
Here:
- <snapshot-name>: Name of the VolumeSnapshot object defined in the YAML file.
- <namespace-name>: Namespace where the VolumeSnapshot object is created.
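You can also confirm that the snapshot objects exist and are marked ready for use. A minimal sketch using the standard snapshot resources; names are placeholders:
# List snapshots in the namespace and check the READYTOUSE column
kubectl get volumesnapshot -n <namespace-name>
# Inspect the cluster-scoped content object if a snapshot never becomes ready
kubectl get volumesnapshotcontent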
Troubleshooting No Space on Disk Error
The devicemapper storage driver used by Docker allows only 10 GB of container storage by default, resulting in "no space left on device" errors when writing to new directories for a new volume mount request. If --maxvolumepernode is configured to be greater than 20 and the underlying Docker installation uses the devicemapper storage driver, do one of the following to increase the available storage:
- Change the storage driver to a value other than devicemapper, which restricts container storage to 10 GB by default.
- Increase the container storage to more than the default of 10 GB for the devicemapper storage driver for the Docker container running on a Kubernetes worker node:
  - In the /etc/sysconfig/docker-storage file, add --storage-opt dm.basesize=50G under the DOCKER_STORAGE_OPTIONS section, as shown in the sketch after this list.
  - Restart Docker.
  - Run the following command to confirm that the setting is correctly applied:
    docker info | grep "Base Device Size"
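The following is a minimal sketch of what the /etc/sysconfig/docker-storage change might look like; any options already present on your node should be kept, and dm.basesize=50G is the only relevant addition here:
# /etc/sysconfig/docker-storage (example; existing options on your node may differ)
DOCKER_STORAGE_OPTIONS="--storage-opt dm.basesize=50G"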