Removing Disks from the File System
Explains how to remove disks using either the Control System or the CLI.
About this task
When you remove a disk from the file system, the other disks in its storage pool are automatically removed from the file system as well and are no longer in use (they are available but offline). Their disk storage goes to 0%, and they are eligible to be added again to the file system to build a new storage pool. You can either replace the disk and re-add it along with the other disks that were in the storage pool, or just re-add the other disks if you do not plan to replace the disk you removed. See Adding Disks to the File System for more information.
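Before removing anything, it can help to review the disks the file system currently sees on the node. A minimal sketch, using a placeholder hostname:

# List the disks on the node as the file system sees them (hostname is a placeholder).
maprcli disk list -host node1.example.com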
NOTE: Run the maprcli disk remove command without the -force 1 option first and examine the warning messages to make sure you are not removing the disk with Container ID 1. To safely remove such a disk, perform a CLDB Failover to make one of the other CLDB nodes the primary CLDB, then remove the disk as normal with the addition of the -force 1 option.

NOTE: Run the /opt/mapr/server/fsck utility before removing or replacing disks.

WARNING: Using the /opt/mapr/server/fsck utility with the -r flag to repair a file system risks data loss. Call HPE Ezmeral Data Fabric support before using /opt/mapr/server/fsck -r.
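Before forcing a removal, you can confirm which node currently holds the primary CLDB and try the removal without -force 1 so you can read the warnings first. A minimal sketch, with a placeholder hostname and disk name:

# Show which node is the current CLDB master.
maprcli node cldbmaster

# Attempt the removal without -force 1 and examine any warnings before deciding to force.
maprcli disk remove -disks /dev/sdd -host node1.example.com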
Removing Disks from the File System Using the Control System
About this task
Complete the following steps to remove disks using the Control System:
Procedure
- Log in to the Control System and go to the Summary tab in the node information page.
- Select the disks to remove in the Disks pane and click Remove Disk(s) from File System.
The Remove Disk(s) from File System confirmation dialog displays.
WARNING: One or more disks you selected may have unreplicated data on it and this action will forcefully remove the disks.
- Review the list and click Remove Disk.
Wait several minutes while the removal process completes. After you remove the disks, any other disks in the same storage pools are taken offline and marked as available (not in use by HPE Ezmeral Data Fabric).
- Remove the physical disks from the node or nodes according to the correct hardware procedure.
- From a command line terminal, remove the failed disk log file from the /opt/mapr/logs directory, as shown in the sketch after this procedure. These log files are typically named like this: diskname.failed.info
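A minimal sketch of this cleanup, assuming the removed disk was /dev/sdd (the file name is a placeholder; match it to the disk you removed):

# Check for leftover failed-disk log files.
ls /opt/mapr/logs/*.failed.info

# Remove the log file for the replaced disk (name is a placeholder).
rm /opt/mapr/logs/sdd.failed.info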
Removing Disks from the File System Using the CLI or REST API
Procedure
- On the node, determine which disk to remove or replace by examining the disk entries in the /opt/mapr/logs/faileddisk.log file.
- Run the following command, substituting the hostname or IP address for <host> and a list of disks for <disks>:
maprcli disk remove -disks <disk names> -host <host>
NOTE: This command does not remove a disk containing unreplicated data unless forced. For complete reference information, see disk remove.
- Examine the screen output in response to the command you ran in step 2. For example:
maprcli disk remove -host `hostname -f` -disks /dev/sdd
message     host     disk
removed.    host1    /dev/sdd
removed.    host1    /dev/sde
removed.    host1    /dev/sdf

Make a note of the additional disks removed when the disk is removed. For example, the disks /dev/sde and /dev/sdf are part of the same storage pool and therefore removed along with the disk (/dev/sdd).
- Confirm that the removed disks do not appear in the disktab file.
- Remove the disk log file from the /opt/mapr/logs directory. For failed disks, these log files are typically named in the pattern diskname.failed.info. A sketch of these last two steps follows this procedure.
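A minimal sketch of those checks, assuming the disktab file is at /opt/mapr/conf/disktab (this path is an assumption; use the location for your installation) and that /dev/sdd, /dev/sde, and /dev/sdf were the removed disks:

# Confirm the removed disks no longer appear in disktab; no output means they are gone.
grep -E 'sdd|sde|sdf' /opt/mapr/conf/disktab

# Remove the failed-disk log file for the bad disk (file name is a placeholder).
rm /opt/mapr/logs/sdd.failed.info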
What to do next
When you replace a failed disk, add it back to the file system along with the other disks from the same storage pool that were previously removed. Adding only the replacement disk to the file system results in a non-optimal storage pool layout, which can lead to degraded performance.
Once you add the disks to the file system, the cluster automatically allocates properly sized storage pools. For example, if you add ten disks, HPE Ezmeral Data Fabric allocates two storage pools of three disks each and two storage pools of two disks each.
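For example, to re-add a replaced disk together with the other disks from its former storage pool (hostname and disk names are placeholders):

# Re-add the replacement disk along with the other disks that were removed with it.
maprcli disk add -disks /dev/sdd,/dev/sde,/dev/sdf -host node1.example.com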