Configuring Secure Clusters for Cross-Cluster NFS Access
Describes how to manually set up cross-cluster NFS access.
About this task
HPE Ezmeral Data Fabric-NFS offers many usability and interoperability advantages, and makes big data radically easier and less expensive to use. In a secure environment, however, you must configure NFS carefully because the NFS protocol is inherently insecure. Running the NFS server on any cluster node can leave the file system world readable and writeable by any machine that knows the IP address of the node running the NFS server and has access to the network, regardless of permissions, passwords, and other security mechanisms. At a minimum, configure iptables firewall rules on all cluster nodes where the NFS server is running, to restrict incoming NFS traffic to authorized client IP addresses.
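For example, a minimal iptables sketch that restricts NFS traffic to a single trusted client subnet might look like the following. The subnet 10.10.30.0/24 is an assumption for illustration, and the rules cover only the portmapper port (111) and the NFS port (2049); verify the full set of ports your gateway actually uses before relying on rules like these.

# Allow NFS traffic only from the trusted client subnet (assumed 10.10.30.0/24)
iptables -A INPUT -p tcp --dport 111 -s 10.10.30.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -s 10.10.30.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 2049 -s 10.10.30.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 2049 -s 10.10.30.0/24 -j ACCEPT
# Drop NFS traffic from all other sources
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -j DROP
iptables -A INPUT -p tcp --dport 2049 -j DROP
iptables -A INPUT -p udp --dport 2049 -j DROP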
Configuring cross-cluster NFS access might expose the entire file system of the other cluster to be world readable and writeable as well. Therefore, automated configuration for cross-cluster NFS access is not available with the configure-crosscluster.sh utility. You should manually configure cross-cluster NFS access only if you are fully aware of the security risks and have taken appropriate steps to mitigate them by securing both your NFS gateway and incoming client traffic.
You can provide NFS access to two secure clusters in one of two ways:
- Run the NFS server on one cluster.
For this method, configure cross-cluster NFS security for the NFS gateway on one cluster, so that the NFS client can mount the file system once from the NFS gateway and then access the file systems of both clusters (see the mount sketch after this list).
- Run the NFS server on both clusters.
For this method, cross-cluster NFS configuration is not needed. The NFS client can mount the HPE Ezmeral Data Fabric file system individually for each cluster. This method requires that the NFS gateway be run on each cluster and that the client perform one NFS mount for each NFS file system to be accessed.
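For illustration, the client-side mounts for the two methods might look like the following. The gateway host names are hypothetical, and the hard,nolock options are a common choice for data-fabric NFS mounts rather than a requirement.

# Method 1: one mount through the gateway on clusterA; both clusters
# are then visible under /mapr/<cluster-name>
mount -o hard,nolock nfsgatewayA.cluster.com:/mapr /mapr

# Method 2: one mount per cluster, each through that cluster's own gateway
mount -o hard,nolock nfsgatewayA.cluster.com:/mapr/clusterA.cluster.com /mnt/clusterA
mount -o hard,nolock nfsgatewayB.cluster.com:/mapr/clusterB.cluster.com /mnt/clusterB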
The following procedure describes how to set up NFS for the first method:
Procedure
- Log in to any node on the secure cluster where the NFS server is running.
In the rest of this procedure, this cluster is referred to as clusterA.cluster.com and the remote cluster is referred to as clusterB.cluster.com.
- Set up the /opt/mapr/conf/maprserverticket file on clusterA.cluster.com to include the server ticket from clusterB.cluster.com.
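The exact substeps depend on your environment, but one way to do this is sketched below. It assumes you can read /opt/mapr/conf/maprserverticket on a clusterB node over SSH (nodeB.clusterB.cluster.com is a hypothetical host name) and that the node you are logged in to runs the NFS server.

# Append clusterB's server ticket entry to clusterA's maprserverticket file
ssh root@nodeB.clusterB.cluster.com cat /opt/mapr/conf/maprserverticket >> /opt/mapr/conf/maprserverticket
# Restart the NFS server on this node so it reads the updated ticket file
maprcli node services -nodes $(hostname -f) -nfs restart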
- Verify data access on both clusters using NFS.
Users with access to the NFS servers must be able to access data in both clusters by providing the correct path. For example, such users can verify access by running commands similar to the following:
# ls /mapr
clusterA.cluster.com  clusterB.cluster.com
# ls /mapr/clusterB.cluster.com/
apps  file  CLUSTERB  hbase  opt  tmp  user  var