Copying Data Using NFS for the HPE Ezmeral Data Fabric

Describes how to copy files from an HDFS cluster to a data-fabric cluster using NFS for the HPE Ezmeral Data Fabric.

If NFS for the HPE Ezmeral Data Fabric is installed on the Data Fabric cluster, you can mount the Data Fabric cluster on the HDFS cluster and then copy files from one cluster to the other using hadoop distcp. If you do not have NFS for the HPE Ezmeral Data Fabric installed and a mount point configured, see Accessing Data with NFS v3 and Managing the HPE Ezmeral Data Fabric NFS Service.

To perform a copy using distcp via NFS for the HPE Ezmeral Data Fabric, you need the following information:
  • <Data Fabric NFS Server> - the IP address or hostname of the NFS server in the data-fabric cluster
  • <maprfs_nfs_mount> - the NFS export mount point configured on the data-fabric cluster; default is /mapr
  • <hdfs_nfs_mount> - the mount point on the HDFS cluster where the Data Fabric NFS export is mounted
  • <NameNode> - the IP address or hostname of the NameNode in the HDFS cluster
  • <NameNode Port> - the port on the NameNode in the HDFS cluster
  • <HDFS path> - the path to the HDFS directory from which you plan to copy data
  • <Data Fabric file system path> - the path in the Data Fabric cluster to which you plan to copy HDFS data
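Gathering these values up front makes the later commands easy to assemble. The following shell sketch shows how the placeholders fit together; every value in it (the server address, export name, mount point, and paths) is a hypothetical example matching the examples in the steps, not a default:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own cluster details.
NFS_SERVER="10.10.100.175"        # <Data Fabric NFS Server>
MAPRFS_EXPORT="mapr"              # <maprfs_nfs_mount>, mounted as /mapr
HDFS_NFS_MOUNT="hdfsmount"        # <hdfs_nfs_mount>
NAMENODE="nn1"                    # <NameNode>
NAMENODE_PORT="8020"              # <NameNode Port>
HDFS_PATH="user/sara/file.txt"    # <HDFS path>
DF_PATH="user/sara"               # <Data Fabric file system path>

# Print the fully expanded commands that the steps below run:
echo "mount ${NFS_SERVER}:/${MAPRFS_EXPORT} /${HDFS_NFS_MOUNT}"
echo "hadoop distcp hdfs://${NAMENODE}:${NAMENODE_PORT}/${HDFS_PATH} file:///${HDFS_NFS_MOUNT}/${DF_PATH}"
```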
To copy data from HDFS to the Data Fabric file system using NFS for the HPE Ezmeral Data Fabric, complete the following steps:
  1. Mount the Data Fabric cluster.
    Issue the following command to mount the Data Fabric cluster to the HDFS NFS for the HPE Ezmeral Data Fabric mount point:
    mount <Data Fabric NFS Server>:/<maprfs_nfs_mount> /<hdfs_nfs_mount>
    Example
    mount 10.10.100.175:/mapr /hdfsmount
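    A mount issued this way lasts only until the node reboots. If the copy job will be repeated, an /etc/fstab entry on the HDFS cluster node can make the mount persistent. A sketch using the same hypothetical server and mount point as the example above, with the hard and nolock options commonly used for NFSv3 mounts:

    ```
    # Hypothetical /etc/fstab entry; replace the server, export, and
    # mount point with your own values.
    10.10.100.175:/mapr  /hdfsmount  nfs  hard,nolock  0  0
    ```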
  2. Copy data.
    1. Issue the following command to copy data from the HDFS cluster to the Data Fabric cluster:
      hadoop distcp hdfs://<NameNode>:<NameNode Port>/<HDFS path> file:///<hdfs_nfs_mount>/<Data Fabric file system path>

      Example

      hadoop distcp hdfs://nn1:8020/user/sara/file.txt file:///hdfsmount/user/sara
    2. Issue the following command from the Data Fabric cluster to verify that the file was copied to the Data Fabric cluster:
      hadoop fs -ls /<Data Fabric file system path>
      Example
      hadoop fs -ls /user/sara