config

Lists configuration values for the Data Fabric cluster.

Configuration Fields

The following fields are configurable.

cldb.balancer.disk.max.switches.in.nodes.percentage
Default Value: 10
The maximum number of containers that can be balanced in parallel by the disk balancer. The value is a percentage of the number of nodes in the system.
cldb.disk.balancer.enable
Default Value: 1 (Disk Balancer is enabled)
Enables (1) or disables (0) the Disk Balancer.
cldb.balancer.disk.sleep.interval.sec
Default Value: 120
The sleep interval (in seconds) between two successive runs of the Disk Balancer.
cldb.balancer.disk.threshold.percentage
Default Value: 70
The percentage of used space at which containers in a storage pool are distributed across other, less-used storage pools.
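For example, to enable the Disk Balancer and lower this threshold, commands along the following lines can be used; the syntax mirrors the config save example shown later on this page, and the threshold value of 60 is purely illustrative:

/opt/mapr/bin/maprcli config save -values '{"cldb.disk.balancer.enable":"1"}' -json
/opt/mapr/bin/maprcli config save -values '{"cldb.balancer.disk.threshold.percentage":"60"}' -json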
cldb.balancer.logging
Default Value: 0
Disables (0) or enables (1) the logging of messages in the Disk Balancer and Role Balancer.
cldb.balancer.role.max.switches.in.nodes.percentage
Default Value: 10
The percentage (of the number of nodes in the system) to use to determine the maximum number of containers whose roles (Masters and Tails) are balanced in parallel by the Role Balancer.

For example, suppose there are 500 nodes and the value of this parameter is 10 (that is, 10%). The number of containers whose roles are balanced in parallel is (10/100) * 500 = 50.

cldb.balancer.role.paused
Default Value: 1
Enables (0) or disables (1) the Role Balancer.
cldb.balancer.role.sleep.interval.sec
Default Value: 900
The sleep interval (in seconds) between two successive runs of the Role Balancer.
cldb.balancer.startup.interval.sec
Default Value: 1800
The initial startup delay (in seconds) of the Role Balancer for existing clusters.
cldb.cluster.almost.full.percentage
Default Value: 90
The percentage at which the CLUSTER_ALARM_CLUSTER_ALMOST_FULL alarm is triggered.
cldb.container.alloc.selector.algo
Default Value: 0
The allocation algorithm to use when creating new containers (see the example after this list). The value can be one of:
  • 0 - Uses the Round Robin algorithm if the number of nodes is less than or equal to 100, and the Randomized algorithm otherwise.
  • 1 - Uses the Round Robin algorithm. Containers are allocated across nodes in a topology in a round-robin fashion.
  • 2 - Uses the Randomized algorithm. Containers are allocated across nodes in a randomized way.
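For example, to select the Randomized algorithm explicitly, a command like the following can be used; the syntax mirrors the config save example shown later on this page:

/opt/mapr/bin/maprcli config save -values '{"cldb.container.alloc.selector.algo":"2"}' -json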
cldb.container.assign.buffer.sizemb
Default Value: 1024
The amount of container space (in MB) to reserve as a buffer. When allocating a new container, this size is deducted from the maximum container size.
NOTE
When you modify the value of cldb.container.sizemb, check and update the value of cldb.container.assign.buffer.sizemb to prevent new containers from being created when existing containers are not full.
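As a sketch of the note above, if the maximum container size were raised, the buffer could be updated in the same session; both values shown here are purely illustrative, and the syntax mirrors the config save example shown later on this page:

/opt/mapr/bin/maprcli config save -values '{"cldb.container.sizemb":"65536"}' -json
/opt/mapr/bin/maprcli config save -values '{"cldb.container.assign.buffer.sizemb":"2048"}' -json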
cldb.container.create.diskfull.threshold
Default Value: 85
The percentage of used space at which a file server is classified as full.
cldb.container.sizemb
Default Value: 32768

The maximum size for containers (in MB). This is a soft limit.

NOTE
When the cldb.container.sizemb value is modified, check and update the value of cldb.container.assign.buffer.sizemb to prevent new containers from being created when existing containers are not full.
cldb.default.chunk.sizemb
Default Value: 256
The size (in MB) of each chunk that makes up a file in the Data Fabric file system.
cldb.default.volume.topology
Default Value: /data
The default topology for new volumes.
cldb.dialhome.metrics.file.rotation.period
Default Value: 365
The retention period (in days) of the files used to record Dialhome metrics. Files that are past their retention period are automatically deleted.
cldb.disable.alarm.history
Default Value: 0 (false)
Set this to 1 (true) to disable CLDB alarm history, as tracking and fetching the alarm history can degrade the performance of CLDB on large clusters.
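For example, to disable CLDB alarm history on a large cluster, a command like the following can be used (syntax as in the config save example shown later on this page):

/opt/mapr/bin/maprcli config save -values '{"cldb.disable.alarm.history":"1"}' -json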
cldb.fs.mark.rereplicate.sec
Default Value: 3600
The number of seconds that a node can fail to heartbeat before it is considered dead. Once a node is considered dead, the CLDB re-replicates any data contained on the node.
cldb.fs.reregistration.wait.time
Default Value: 15
The amount of time (in minutes) to wait before checking for inactive nodes.
NOTE
Reduce this value to raise the No Heartbeat Alarm sooner after a CLDB failover. To avoid spurious alarms, do not reduce this value below 5 (minutes).
cldb.log.fileserver.timeskew.interval.mins
Default Value: 60
The frequency (in minutes) at which CLDB should log messages about the time skew on the file server.
cldb.max.parallel.resyncs.star
Default Value: 3
The number of container replicas that can resync in parallel from the source for low-latency (star-replicated) volumes.
cldb.max.snapshots.per.volume
Default Value: 4096
The maximum number of snapshots that you can create for a volume. CLDB will fail snapshot creation once the number of snapshots reaches this limit. Increasing this value has performance implications. This should only be changed in consultation with the HPE Data Fabric support team.
cldb.mfs.heartbeat.timeout.multiple
Default Value: 10
Specifies the heartbeat timeout as a multiple of the heartbeat interval. For small clusters, the heartbeat interval is 1 second and the default multiple is 10, which makes the heartbeat timeout 10 seconds.
cldb.min.fileservers
Default Value: 1
The number of file servers hosting the CLDB volume that is required for the master CLDB to complete the bootstrap process.
cldb.num.active.cg.containers
Default Value: 20
Number of containers to be assigned for a CG assign request. The value can be any integer between 0 and 100.
cldb.pbs.access.control.enabled
Default Value: 1
Enables (1) or disables (0) policy access controls (ACEs set in security policies) at the cluster level. When set to 0, the system does not enforce security policy ACEs for data operations in the cluster. See Disabling Policy Access Controls at the Cluster-Level for additional information.
cldb.pbs.audit.only.policy.check
Default Value: 0
Set the value to 1 to enable audit-only policy checks (permissive mode). Permissive mode is useful during initial deployment when testing security policies. When permissive mode is enabled, the volume-level enforcementmode option PolicyAceAuditAndDataAce can be set. In this mode:
  • Resource-level ACEs are enforced.
  • If security policies are tagged to data objects, the security policies are checked for access; any access denied events will be audited, but access will be allowed.
See Setting Global Configuration Options for Policy-Based Security for additional information.
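For example, to enable permissive mode while testing security policies, a command like the following can be used; the syntax mirrors the config save example shown later on this page:

/opt/mapr/bin/maprcli config save -values '{"cldb.pbs.audit.only.policy.check":"1"}' -json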
cldb.pbs.max.security.policy
Default Value: 10000
Maximum number of configured security policies allowed. Prevents users from arbitrarily creating numerous security policies, which could impact performance.
cldb.pbs.global.master
Default Value: 0
Sets the master security policy cluster for the global namespace. You can configure a cluster to perform one of the following roles:
  • Master — A master security policy cluster is required to create and manage security policies. Only one master security policy cluster can exist.
  • Member — On a cluster designated as Member, you can view the security policies available and apply them to data objects.
By default, the host is set to member (0) upon a new installation or upgrade. To set the host to master, and enable the creation and modification of security policies, set the value of this property to 1.

For more information, see config save.
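For example, to designate a cluster as the master security policy cluster, a command like the following can be run on that cluster using config save:

/opt/mapr/bin/maprcli config save -values '{"cldb.pbs.global.master":"1"}' -json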

cldb.replication.manager.critical.paused
Default Value: 0
Enables (0) or disables (1) the processing of critically under-replicated containers. When enabled, critically under-replicated containers are processed on a priority basis to increase the number of copies.
cldb.replication.manager.max.resyncs.in.nodes.percentage
Default Value: 1200
The number of containers that can be replicated in parallel, expressed as a percentage of the number of active nodes. If the value is 1200, the number of containers that can be replicated is 12 times the number of active nodes.
cldb.replication.manager.over.paused
Default Value: 0
Enables (0) or disables (1) the processing of over-replicated containers. When enabled, over-replicated containers (containers with more copies than the desired replication factor) are processed to delete the extra copies.
cldb.replication.manager.start.mins
Default Value: 15
The delay (in minutes) between CLDB startup and replication manager startup, to allow all nodes to register and heartbeat.
cldb.replication.max.in.transit.containers.per.sp
Default Value: 4
The maximum number of containers that can be in transit on a storage pool (SP). Containers that serve either as the source or destination of a resync operation are considered as being in ‘transit’.
cldb.replication.sleep.interval.sec
Default Value: 15
The sleep duration (in seconds) between consecutive runs of the Replication Manager.
cldb.replication.tablescan.interval.sec
Default Value: 120
The sleep duration (in seconds) between consecutive runs of the Replication Scanner. The Replication Scanner classifies containers into different buckets, while the Manager thread either replicates containers or removes additional copies.
cldb.rm.wait.rack.violated.fork.copy.mins
Default Value: 720
The buffer time (in minutes) after which all container copies found on the same rack are fixed.
cldb.rm.wait.fork.on.same.rack.mins
Default Value: 180
The time (in minutes) to defer creating containers on the same rack, for critically under-replicated containers, if there are at least two copies of the containers.
cldb.security.user.ticket.duration.seconds
Default Value: 1209600
The length of time (in seconds) before the user ticket (generated using the maprlogin password command) expires.
cldb.security.user.ticket.max.duration.seconds
Default Value: 2592000
The maximum amount of time (in seconds) allowed for the user ticket (generated using the maprlogin password command).
cldb.security.user.ticket.renew.duration.seconds
Default Value: 2592000
The length of time (in seconds) to renew the user ticket (generated using the maprlogin password command).
cldb.security.user.ticket.renew.max.duration.seconds
Default Value: 7776000
The maximum duration allowed for renewal of a user ticket (generated using the maprlogin password command).
cldb.snapshot.restore.on.volume.unmount.only
Default Value: 1 (true)
Indicates whether the Snapshot Restore operation is allowed only after verifying that the volume is unmounted.

By default, the volume restore operation is allowed only if the volume is unmounted, ensuring that no application is accessing any data in the volume.

Set this flag to 0 (false) to perform the restore operation in a single step, without verifying whether the volume is unmounted.

To set this flag to 0, run:

/opt/mapr/bin/maprcli config \
 save -values '{"cldb.snapshot.restore.on.volume.unmount.only":"0"}' -json
cldb.topology.almost.full.percentage
Default Value: 90
The threshold percentage of used space on the nodes of a topology at which an alarm is raised.
cldb.volume.epoch
Default Value: Not Applicable
The starting epoch of a new Container. Epoch is used internally in the selection of the master container.
cldb.volumes.namespace.default.min.replication
Default Value: 2
The minimum replication factor for the name container. Containers with fewer copies than this value are replicated on a priority basis.
cldb.volumes.namespace.default.replication
Default Value: 3
The desired replication factor for the name container.
mapr.fs.nocompression
Default Value: "bz2,gz,tgz,tbz2, zip,z,Z,mp3,jpg, jpeg,mpg,mpeg,avi, gif,png,lzo,jar"
The file types that should not be compressed. See File Extensions of Compressed Files.
mapr.fs.permissions.supergroup
Default Value: root
The super group of the Data Fabric file system layer.
mapr.fs.permissions.superuser
Default Value: mapr
The super user of the Data Fabric file system layer.
mapr.targetversion
Default Value: Not Applicable
Sets the current version of the Data Fabric distribution. If this variable is not set during an upgrade, alarms can be missed when the nodes in a cluster are not all at the same version of the software.
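As a sketch, the target version could be recorded after an upgrade with a command like the following; <new-version> is a placeholder for the actual Data Fabric version, and the syntax mirrors the config save example shown earlier on this page:

/opt/mapr/bin/maprcli config save -values '{"mapr.targetversion":"<new-version>"}' -json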
mfs.db.parallel.copyregions
Default Value: Not Applicable
The number of parallel copy regions per MFS instance. Setting this field to a larger value increases the parallelism for data transfers during index updates, CDC propagation, and table replication. A larger value increases the transfer rate and reduces the initial synchronization time, but uses more system resources. The latter may impact the response time and performance of applications that read data from the same nodes.
mfs.high.memory.alarm.threshold
Default Value: 110 (percentage of allocated memory)
On initialization, the Data Fabric file system is allocated a certain amount of memory. There is some additional headroom that can be used if the Data Fabric file system is under memory pressure. However, if the Data Fabric file system exceeds the high memory threshold (by default, 10% over the allocated memory, that is, 110%), the High FileServer Memory Alarm is raised. This threshold can be set from 8% to 30% over the allocated memory (that is, 108% to 130%).
mfs.feature.db.json.support
Default Value:
  • 1 for new Data Fabric installations
  • 0 for upgraded Data Fabric installations
Disables (0) or enables (1) Data Fabric streams and support for JSON documents and tables in HPE Ezmeral Data Fabric Database.
mfs.feature.devicefile.support
Default Value: 1
Disables (0) or enables (1) usage of Named Pipes over NFS.
mfs.resync.disk.throttle.factor
Default Value: 20
This factor affects the wait time imposed on the Data Fabric file system during resync operations, to allow other disk I/O operations to happen in tandem. Use this variable to throttle the speed of disk I/O during resync operations. Increasing the value of mfs.resync.disk.throttle.factor decreases the wait time and therefore decreases the throttling of disk bandwidth during resync operations; decreasing the value has the opposite effect. To disable disk bandwidth throttling, set mfs.resync.disk.throttle.factor to 10000 or higher.
WARNING
When throttling is disabled, unthrottled resync operations can cause clients accessing hosts involved in the resync operations to be starved of disk bandwidth.
mfs.resync.network.throttle.factor
Default Value: 20
This factor affects the wait time imposed on the Data Fabric file system during resync operations, to allow other network operations to happen in tandem. Use this variable to throttle the network speed during resync operations. Increasing the value of mfs.resync.network.throttle.factor decreases the wait time and therefore decreases the throttling of network bandwidth during resync operations; decreasing the value has the opposite effect. To disable network bandwidth throttling, set mfs.resync.network.throttle.factor to 10000 or higher.
WARNING
When throttling is disabled, unthrottled resync operations can cause clients accessing hosts involved in the resync operations to be starved of network bandwidth.
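For example, to disable both disk and network bandwidth throttling during resync operations, as described in the two entries above, commands like the following can be used; the syntax mirrors the config save example shown earlier on this page:

/opt/mapr/bin/maprcli config save -values '{"mfs.resync.disk.throttle.factor":"10000"}' -json
/opt/mapr/bin/maprcli config save -values '{"mfs.resync.network.throttle.factor":"10000"}' -json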
pernode.numcntrs.alarm.thr
Default Value: 50000
The maximum number of Read/Write (RW) containers on each node beyond which performance may not be optimal. The optimal number for RW and snapshot containers combined is 10 times the value of this parameter.
mastgateway.recallexp.opt.enabled
Default Value: 1

Enables or disables recall expiry optimization for non-large containers. When the value is set to 1, recall expiry optimization is enabled for non-large containers; when set to 0, it is disabled.

If recall expiry optimization is enabled, the MAST gateway performs the recall expiry operation.
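For example, to disable recall expiry optimization for non-large containers, a command like the following can be used; the syntax mirrors the config save example shown earlier on this page:

/opt/mapr/bin/maprcli config save -values '{"mastgateway.recallexp.opt.enabled":"0"}' -json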

mastgateway.recallexp.opt.minpurgemb
Default Value: 8 MB
Recall threshold value in MB, used in conjunction with mastgateway.recallexp.opt.enabled. For non-large containers, recall expiry is run only if recall expiry optimization is enabled and the size of the recalled data on the container, as determined by the MAST gateway, is larger than this configured threshold.
mastgateway.recallexp.opt.largenuminodes.minpurgemb
Default Value: 2 GB
Configures the recall threshold value for large containers. Recall expiry is run only if recall expiry optimization is enabled and the size of the recalled data on the container, as determined by the MAST gateway, is larger than this configured threshold.
mastgateway.offload.opt.largenuminodes
Default Value: 8 million
Defines the criterion for classifying a container as a large container. When the number of inodes in a container exceeds this value, the container is classified as a large container.
mastgateway.recallexp.opt.largenuminodes.enabled
Default Value: 1
Enables (1) or disables (0) recall expiry optimization for large containers.
mastgateway.ctc.opt.largenuminodes.enabled
Default Value: 1

Enables (1) or disables (0) compaction optimization for large containers.

mastgateway.ctc.opt.largenuminodes.skipqualifiedctrs.enabled
Default Value: 1
When this configuration variable is set to 1, compaction is skipped for any large container (namespace container or data container) that has garbage of a size higher than the value of mastgateway.ctc.opt.largenuminodes.threshmb. When compaction is skipped in this manner, the VOLUME_ALARM_COMPACTION_SKIPPED_LARGE_CONTAINER alarm is raised. This configuration variable lets administrators decide whether to allow scheduled compaction to run on large containers, because compaction on large containers can take time. The compaction can instead be run manually at a suitable time, such as during off-peak hours. Refer to Running the Compactor Using the CLI and REST API for details on running the compactor manually via the CLI or REST.
mastgateway.ctc.opt.largenuminodes.threshmb
Default Value: 2 GB

This configuration variable represents the garbage threshold for large containers. For a large container, if the garbage size to reclaim is less than mastgateway.ctc.opt.largenuminodes.threshmb, compaction is skipped for that container.

mastgateway.offload.opt.largenuminodes.mindatamb
Default Value: 2 GB
This configuration variable represents the minimum data-to-offload threshold for large containers. For a large container, if the size of the data to offload is less than mastgateway.offload.opt.largenuminodes.mindatamb, offload is not triggered on the container.