Configure NFS Write Performance
Describes how to set the optimal value for outstanding Remote Procedure Call (RPC) requests to the NFS server.
The default configuration for outstanding Remote Procedure Call (RPC) requests to the NFS server can negatively impact performance and memory use. To avoid these issues, configure the number of outstanding RPC requests to the NFS server to be 128. The NFS client uses this value to determine how many parallel requests to send to the NFS server and when to send them.
- If the value is too small, the NFS client sends too few parallel requests, which decreases performance.
- If the value is too high, the NFS client sends more parallel requests than the NFS server can handle, and the server discards the excess. The client must then resend the discarded requests, which also degrades performance.
The kernel tunable sunrpc.tcp_slot_table_entries represents the number of simultaneous RPC requests. The default value of this tunable is 16 on Red Hat versions prior to 6.3; on Red Hat versions 6.3 and above, the default value is 65536. Setting this value to 128 (an increase or a decrease, depending on the Red Hat version in use) may improve write speeds. Use the command sysctl -w sunrpc.tcp_slot_table_entries=128 to set the value, and add an entry to your sysctl.conf file to make the setting persist across reboots.
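The persistent entry can be sketched as follows (the exact location may be /etc/sysctl.conf or a drop-in file under /etc/sysctl.d, depending on the distribution):

```
# /etc/sysctl.conf -- keep 128 outstanding RPC slots across reboots
sunrpc.tcp_slot_table_entries = 128
```

Running sysctl -p loads the file without a reboot.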
Perform the following steps as the root user, on each NFS client machine:
- Issue the following commands to create the sunrpc.conf file under /etc/modprobe.d with the recommended configuration. These commands enable the configuration to persist after a reboot of the NFS client machine.
  echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
  echo "options sunrpc tcp_max_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
- Issue the following echo commands. These commands enable the configuration to take effect after you remount the NFS client to the NFS for the HPE Ezmeral Data Fabric gateway.
  echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries
  echo 128 > /proc/sys/sunrpc/tcp_max_slot_table_entries
- Remount the NFS client to the NFS for the HPE Ezmeral Data Fabric gateway. Mount the data-fabric NFS server with an rsize and wsize of 128K; this value significantly cuts down the number of NFS server requests for a given transfer and improves overall performance. For example, the following commands unmount and remount the NFS server, assuming that the cluster is mounted at /mapr.
  umount /mapr
  mount -o nolock,rsize=131072,wsize=131072 <hostname>:/mapr /mapr
- After rebooting the node, if the /proc/sys/sunrpc directory is not available, or if rpcidmapd is not running, start the rpcidmapd service using the following command:
  service rpcidmapd start
If these tunables are not set, errors such as the following can appear in the /opt/mapr/logs/nfsserver.log file:
ERROR nfsserver[38960] fs/nfsd/requesthandle.cc:791 0.0.0.0[0] cannot allocate more OncRpcContexts: [numDropped=2556001] dropping connection from nfsc=10.13.64.225:0
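After applying the settings, you can confirm the running values with a quick check like the following sketch (check_slot is a hypothetical helper name; the /proc entries exist only when the sunrpc kernel module is loaded):

```shell
# check_slot prints the current value of a sunrpc tunable, or "unavailable"
# when the sunrpc module is not loaded (hypothetical helper name).
check_slot() {
  if [ -r "/proc/sys/sunrpc/$1" ]; then
    cat "/proc/sys/sunrpc/$1"
  else
    echo "unavailable"
  fi
}

check_slot tcp_slot_table_entries
check_slot tcp_max_slot_table_entries
```

Both commands should print 128 on a correctly configured client.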
Write performance of NFS for the HPE Ezmeral Data Fabric varies across Linux distributions. The recommended value of this tunable may have no effect, or even a negative effect, on your particular cluster.
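Because results vary, it is worth measuring write throughput on your own cluster before and after the change. A crude sequential-write check can be sketched with dd; TESTFILE below is a hypothetical variable that defaults to a local temporary file, and should be pointed at a path on the data-fabric NFS mount (for example, under /mapr) to measure the real path:

```shell
# Crude sequential-write benchmark (a sketch). Point TESTFILE at a file on the
# NFS mount to measure the data-fabric path; the default is a local temp file.
TESTFILE="${TESTFILE:-$(mktemp)}"

# 64 MiB sequential write; conv=fsync flushes data to the server before
# dd reports throughput on its final status line.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -1

# Clean up the test file.
rm -f "$TESTFILE"
```

Compare the reported throughput with the tunable at its default and at 128 to see whether the change helps on your distribution.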