Lists the parameters of the MFS configuration file.
The configuration file /opt/mapr/conf/mfs.conf specifies the following parameters for the file system server on each node.
WARNING
You must restart the File Server after making changes to this file.
Parameters
mfs.server.ip
- Default Value: Not applicable
- Description: IP address of the File Server. For example, 192.168.10.10.
mfs.server.port
- Default Value: 5660
- Description: Port used for communication with the server.
mfs.cache.lru.sizes
- Default Value:
- For version 4.0.1:
inode:6:log:6:meta:10:dir:40:small:15
- For version 4.0.2 and later versions:
inode:3:meta:6:small:27:dir:15:db:20:valc:3
- Description: LRU cache configuration. See the section Notes on LRU Cache Configuration for more information.
mfs.on.virtual.machine
- Default Value: false
- Description: Specifies whether the file system is
running on a virtual machine.
mfs.io.disk.timeout
- Default Value: 60 seconds
- Description: Timeout, in seconds, after which a disk is considered failed and
taken offline. You can increase the timeout to tolerate slow disks.
mfs.max.disks
- Default Value: 48
- Description: Maximum number of disks supported on a single node.
mfs.max.logfile.size.in.mb
- Default Value: 1000 MB
- Description: The maximum amount of disk space that the MFS logs can consume before the oldest log file is deleted, based on the following calculation:
maxSizePerLogFile = maxLogSize / MAX_NUM_OF_LOG_FILES
where
- maxLogSize = total amount of space that MFS log files can consume
- MAX_NUM_OF_LOG_FILES = total number of MFS log files
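As an illustration of the calculation above, the following sketch assumes a hypothetical MAX_NUM_OF_LOG_FILES of 10; the actual file count is internal to MFS and is not set in mfs.conf:

```shell
# Illustrative sketch of the per-file log size calculation.
# numLogFiles=10 is an assumed value, not a documented default.
maxLogSize=1000    # mfs.max.logfile.size.in.mb default, in MB
numLogFiles=10     # hypothetical MAX_NUM_OF_LOG_FILES
echo "maxSizePerLogFile: $((maxLogSize / numLogFiles)) MB"
```

Under these assumptions, each log file would be capped at 100 MB before rotation deletes the oldest file.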
mfs.max.resync.count
- Default Value: 16
- Description: The number of parallel resync operations.
mfs.subnets.whitelist
- Default Value: Not applicable
- Description: A list of subnets (up to 256 characters) that are allowed to make
requests to the File Server service and access data on the cluster.
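For illustration, such an entry might look like the following in mfs.conf. The subnets shown are placeholder values, and the exact accepted format (for example, comma-separated CIDR blocks) should be confirmed against the documentation for your release:

```
mfs.subnets.whitelist=10.10.15.0/24,10.10.16.0/24
```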
mfs.disk.iothrottle.count
- Default Value: 100
- Description: The maximum number of outstanding requests on disk.
NOTE
You can disable throttling by setting a high value. This option is disabled if you set the value of mfs.disk.is.ssd to 1.
mfs.disk.resynciothrottle.factor
- Default Value: 20
- Description: Controls the amount of time to wait before submitting a request to disk. Increasing this value reduces the wait time, and decreasing it increases the wait time. For example, setting the value to 40 halves the wait time, while setting it to 10 doubles the wait time.
mfs.network.resynciothrottle.factor
- Default Value: 20
- Description: Controls the amount of time to wait before sending a resync operation over the network. Increasing this value reduces the wait time, and decreasing it increases the wait time. For example, setting the value to 40 halves the wait time, while setting it to 10 doubles the wait time.
mfs.ssd.trim.enabled
- Default Value: 0
- Description: Set this parameter to 1 to enable TRIM operations for SSD devices.
NOTE
Enable TRIM only if it is recommended by the SSD vendor.
mfs.disk.is.ssd
- Default Value: 0
- Description: Specifies whether (1) or not (0) the drives are SSDs. If the value is 0, the drives are assumed to be rotational. If the value is 1, the noop scheduler is automatically enabled on the SSD, and I/O throttling is disabled.
mfs.mem.debug.enabled
- Default Value: 0
- Description: Specifies whether the file server should (1) or should not (0) track all memory allocations. If the value is 1, you can determine the root cause of high memory allocation, or determine the component consuming the most memory.
mfs.numrpcthreads
- Default Value: 2
- Description: Specifies the number of RPC threads per MFS instance. The valid range of
values is from 1 to 4.
mfs.db.max.concurrent.internal.ops
- Default Value: 73728 (72 * 1024)
- Max Value: 131072 (128 * 1024)
- Min Value: 36864 (36 * 1024)
- Description: Regulates how many BatchGet operations can run in parallel when
secondary indexes are present on the table. PUT operations on tables with
secondary indexes convert to BatchGet operations on the tables. PUT operations
that convert to a high volume of BatchGets can degrade performance. BatchGet operations are spread equally across three threads (73728/3). Run mrconfig dbinfo threads to evaluate the throttling queue for each thread.
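As a quick check of the arithmetic above, the per-thread share under the default value works out as follows:

```shell
# Per-thread share of the default concurrent-BatchGet budget.
total=73728   # mfs.db.max.concurrent.internal.ops default (72 * 1024)
echo "per-thread share: $((total / 3))"
```

Each of the three threads therefore queues up to 24576 concurrent internal operations at the default setting.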
mfs.num.compress.threads
- Default Value: 1
- Description: Reserved for internal use.
mfs.max.aio.events
- Default Value: 5000
- Description: Reserved for internal use.
mfs.disable.periodic.flush
- Default Value: 0
- Description: Reserved for internal use.
mfs.ignore.container.delete
- Default Value: 0
- Description: Reserved for internal use.
mfs.ignore.readdir.pattern
- Default Value: 0
- Description: Reserved for internal use.
mfs.disable.IO.affinity
- Default Value: 0
- Description: Reserved for internal use.
mfs.deserialize.length
- Default Value: 8192
- Description: Reserved for internal use.
mfs.enable.nat
- Default Value: 0
- Description: Reserved for internal use.
mfs.bulk.writes.enabled
- Default Value: 0
- Description: Reserved for internal use.
Example
mfs.server.ip=192.168.10.10
mfs.server.port=5660
mfs.cache.lru.sizes=inode:3:meta:6:small:27:dir:15:db:20:valc:3
mfs.on.virtual.machine=0
mfs.io.disk.timeout=60
mfs.max.disks=48
Notes on LRU Cache Configuration
The cache values are expressed as percentages, which vary based on the expected size of the data that the node is required to cache. The goal is to achieve a state in which most of the required data comes directly from the cache. You may need to tune the cache percentages based on your cluster configuration and the workload on specific nodes. Non-default allocations tend to work better for nodes that run only CLDB and nodes that do not have CLDB but do have a heavy HPE Ezmeral Data Fabric Database workload. Note the following recommendations.
- For CLDB-only nodes, increase the size of the cache for Dir LRU to 40%: change dir:15 to dir:40. A CLDB-only node is a file server node that hosts only the CLDB volume mapr.cldb.internal (no user volume data is hosted on the node). Dir LRU is used to host B-tree pages.
- For non-CLDB nodes with no HPE Ezmeral Data Fabric Database workload, optimize the cache to host as many file pages as possible. Change the value of the parameter to inode:3:meta:6:small:27:dir:6. The remainder of the cache is used to cache file data pages.
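As a rough sketch of that allocation, the named LRUs claim a fixed percentage of the cache and the remainder goes to file data pages:

```shell
# Named LRU shares from inode:3:meta:6:small:27:dir:6 (percentages).
named=$((3 + 6 + 27 + 6))
echo "named LRUs: ${named}%, file data pages: $((100 - named))%"
```

With this non-DB allocation, 42% of the cache is reserved for the named LRUs and the remaining 58% caches file data pages.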
Note: You need to restart MFS for changes in mfs.conf to take effect.